Advances in E-Learning:
Experiences and Methodologies

Francisco J. García Peñalvo
University of Salamanca, Spain
Information Science Reference
Hershey • New York
Acquisitions Editor: Kristin Klinger
Development Editor: Kristin Roth
Senior Managing Editor: Jennifer Neidig
Managing Editor: Sara Reed
Copy Editor: Jeannie Porter
Typesetter: Jeff Ash
Cover Design: Lisa Tosheff
Printed at: Yurchak Printing Inc.
Published in the United States of America by
Information Science Reference (an imprint of IGI Global)
701 E. Chocolate Avenue, Suite 200
Hershey PA 17033
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.igi-global.com

and in the United Kingdom by
Information Science Reference (an imprint of IGI Global)
3 Henrietta Street
Covent Garden
London WC2E 8LU
Tel: 44 20 7240 0856
Fax: 44 20 7379 0609
Web site: http://www.eurospanonline.com

Copyright © 2008 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher.

Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data

Advances in e-learning : experiences and methodologies / Francisco José García-Peñalvo, editor.
p. cm.
Summary: “This book explores the technical, pedagogical, methodological, tutorial, legal, and emotional aspects of e-learning, considering and analyzing its different application contexts, and providing researchers and practitioners with an innovative view of e-learning as a lifelong learning tool for scholars in both academic and professional spheres”--Provided by publisher.
Includes bibliographical references and index.
ISBN 978-1-59904-756-0 (hardcover) -- ISBN 978-1-59904-758-4 (ebook)
1. Internet in education. 2. Continuing education--Computer-assisted instruction. I. García-Peñalvo, Francisco José.
LB1044.87.A374 2008
371.33’44678--dc22
2007032055
British Cataloguing in Publication Data A Cataloguing in Publication record for this book is available from the British Library. All work contributed to this book set is original material. The views expressed in this book are those of the authors, but not necessarily of the publisher.
If a library purchased a print copy of this publication, please go to http://www.igi-global.com/reference/assets/IGR-eAccess-agreement.pdf for information on activating the library's complimentary electronic access to this publication.
Table of Contents
Preface ............................................................................................................................................... xiv
Acknowledgment .............................................................................................................................. xxii
Chapter I
RAPAD: A Reflective and Participatory Methodology for E-Learning and Lifelong Learning ............ 1
Ray Webster, Murdoch University, Australia

Chapter II
A Heideggerian View on E-Learning ................................................................................................... 30
Sergio Vasquez Bronfman, ESCP-EAP (European School of Management), France

Chapter III
Philosophical and Epistemological Basis for Building a Quality Online Training Methodology ........ 46
Antonio Miguel Seoane Pardo, Universidad de Salamanca, Spain
Francisco José García Peñalvo, Universidad de Salamanca, Spain

Chapter IV
E-Mentoring: An Extended Practice, An Emerging Discipline ............................................................ 61
Angélica Rísquez, University of Limerick, Ireland

Chapter V
Training Teachers for E-Learning, Beyond ICT Skills Towards Lifelong Learning Requirements: A Case Study ........................................................................................................................................ 83
Olga Díez, CEAD Santa Cruz de Tenerife, Spain

Chapter VI
The Role of Institutional Factors in the Formation of E-Learning Practices ....................................... 96
Ruth Halperin, London School of Economics, UK
Chapter VII
E-Learning Value and Student Experiences: A Case Study ............................................................... 112
Krassie Petrova, Auckland University of Technology, New Zealand
Rowena Sinclair, Auckland University of Technology, New Zealand

Chapter VIII
Integrating Technology and Research in Mathematics Education: The Case of E-Learning ............. 132
Giovannina Albano, Università di Salerno, Italy
Pier Luigi Ferrari, Università del Piemonte Orientale, Italy

Chapter IX
AI Techniques for Monitoring Student Learning Process .................................................................. 149
David Camacho, Universidad Autonoma de Madrid, Spain
Álvaro Ortigosa, Universidad Autonoma de Madrid, Spain
Estrella Pulido, Universidad Autonoma de Madrid, Spain
María D. R-Moreno, Universidad de Alcalá, Spain

Chapter X
Knowledge Discovery from E-Learning Activities ............................................................................ 173
Addisson Salazar, Universidad Politécnica de Valencia, Spain
Luis Vergara, Universidad Politécnica de Valencia, Spain

Chapter XI
Swarm-Based Techniques in E-Learning: Methodologies and Experiences ...................................... 199
Sergio Gutiérrez, University Carlos III of Madrid, Spain
Abelardo Pardo, University Carlos III of Madrid, Spain

Chapter XII
E-Learning 2.0: The Learning Community ........................................................................................ 213
Luisa M. Regueras, University of Valladolid, Spain
Elena Verdú, University of Valladolid, Spain
María A. Pérez, University of Valladolid, Spain
Juan Pablo de Castro, University of Valladolid, Spain
María J. Verdú, University of Valladolid, Spain

Chapter XIII
Telematic Environments and Competition-Based Methodologies: An Approach to Active Learning .............................................................................................................................................. 232
Elena Verdú, University of Valladolid, Spain
Luisa M. Regueras, University of Valladolid, Spain
María J. Verdú, University of Valladolid, Spain
Juan Pablo de Castro, University of Valladolid, Spain
María A. Pérez, University of Valladolid, Spain
Chapter XIV
Open Source LMS Customization: A Moodle Statistical Control Application .................................. 250
Miguel Ángel Conde, Universidad de Salamanca, Spain
Carlos Muñoz Martín, CLAY Formación Internacional, Spain
Alberto Velasco Florines, CLAY Formación Internacional, Spain

Chapter XV
Evaluation and Effective Learning: Strategic Use of E-Portfolio as an Alternative Assessment at University .......................................................................................................................................... 264
Nuria Hernández, Universidad de Oviedo, Spain

Chapter XVI
Formative Online Assessment in E-Learning .................................................................................... 279
Izaskun Ibabe, University of the Basque Country, Spain
Joana Jauregizar, Quality Evaluation and Certification Agency of the Basque University System, Spain

Chapter XVII
Designing an Online Assessment in E-Learning ................................................................................ 301
María José Rodríguez-Conde, Universidad de Salamanca, Spain

Chapter XVIII
Quality Assessment of E-Facilitators ................................................................................................. 318
Evelyn Gullett, U21Global Graduate School for Global Leaders, Singapore

Chapter XIX
E-QUAL: A Proposal to Measure the Quality of E-Learning Courses .............................................. 329
Célio Gonçalo Marques, Instituto Politécnico de Tomar, Portugal
João Noivo, Universidade do Minho, Portugal
Compilation of References ............................................................................................................... 350
About the Contributors .................................................................................................................... 386
Index .................................................................................................................................................. 394
Detailed Table of Contents
Preface ............................................................................................................................................... xiv
Acknowledgment .............................................................................................................................. xxii
Chapter I
RAPAD: A Reflective and Participatory Methodology for E-Learning and Lifelong Learning ............ 1
Ray Webster, Murdoch University, Australia

This chapter introduces RAPAD, a reflective and participatory methodology for e-learning and lifelong learning. It argues that by engaging in a reflective and participatory design process for a personalized e-learning environment, individual students can attain a conceptual change in their understanding of the learning and e-learning process, especially their own. Students use a framework provided by the concept of a personal cognitive or learning profile and the design and development of a personalized e-learning environment (PELE) to engage with key aspects of their learning. This results in Flexible Student Alignment, a process by which students are better able to match their learning and e-learning characteristics and requirements to the practices, resources, and structures of universities in the emerging knowledge society. The use of Web-based technologies and personal reflection ensures that RAPAD is well placed to be an adaptive methodology which continues to enhance the process of lifelong learning.

Chapter II
A Heideggerian View on E-Learning ................................................................................................... 30
Sergio Vasquez Bronfman, ESCP-EAP (European School of Management), France

This chapter introduces some ideas of the German philosopher Martin Heidegger and shows how they can be applied to e-learning design. It argues that Heideggerian thinking (in particular the interpretation done by Hubert Dreyfus) can inspire innovations in e-learning design and implementation by putting practice at the center of knowledge creation, which in the case of professional and corporate education means real work situations. It also points out the limits of distance learning imposed by the nature of human beings. Furthermore, the author hopes that Heidegger's ideas will not only inform researchers of a better design for e-learning projects, but also illuminate practitioners on how to design e-learning courses aimed at bridging the gap between “knowing” and “doing.”
Chapter III
Philosophical and Epistemological Basis for Building a Quality Online Training Methodology ........ 46
Antonio Miguel Seoane Pardo, Universidad de Salamanca, Spain
Francisco José García Peñalvo, Universidad de Salamanca, Spain

This chapter outlines the problem of laying the groundwork for building a suitable online training methodology. First, it points out that most e-learning initiatives are developed without a defined method or an appropriate strategy. It then critically analyzes the role of the constructivist model in relation to this problem, affirming that this explanatory framework is not a method and describing the problems to which this confusion gives rise. Finally, it proposes a theoretical and epistemological framework of reference for building this methodology based on Greek paideía. The authors propose that the search for a reference model such as the one developed in ancient Greece will allow us to develop a method based on the importance of a teaching profile “different” from traditional academic roles, which we call “tutor.” It has many similarities to the figures in charge of monitoring learning in both the Homeric epic and Classical Greece.

Chapter IV
E-Mentoring: An Extended Practice, An Emerging Discipline ............................................................ 61
Angélica Rísquez, University of Limerick, Ireland

This chapter integrates existing literature and developments on electronic mentoring to build a constructive view of this modality of mentoring as a qualitatively different concept from its traditional face-to-face version. The concept of e-mentoring is introduced by looking first into the elusive notion of mentoring. Next, some salient e-mentoring experiences are identified.
The chapter goes on to note the differences between electronic and face-to-face mentoring, and how the relationship between mentor and mentee is modified by technology in unique and definitive ways. Readers are also presented with a collection of best practices on the design, implementation, and evaluation of e-mentoring programs. Finally, some practice and research trends are proposed. In conclusion, the author draws an elemental distinction between both modalities of mentoring, defining e-mentoring as more than a defective alternative to face-to-face contact.

Chapter V
Training Teachers for E-Learning, Beyond ICT Skills Towards Lifelong Learning Requirements: A Case Study ........................................................................................................................................ 83
Olga Díez, CEAD Santa Cruz de Tenerife, Spain

This chapter describes an experience in teacher training for e-learning in the field of adult education. It takes the models offered by flexible lifelong learning as the proper way to develop training for in-service teachers, considering the advantages of blended learning for the target audience. The chapter discusses the balance between mere ICT skills and pedagogical competences. In this context, the learning design should always allow teachers in training to integrate into their work ICT solutions that fit the didactic objectives, renew teaching and learning methodology, facilitate communication, make room for creativity, and allow pupils to learn at their own pace. By doing so, they will come closer to the profile of an online tutor, as a practitioner who successfully takes advantage of virtual environments for collaborative work and learning communication.

Chapter VI
The Role of Institutional Factors in the Formation of E-Learning Practices ....................................... 96
Ruth Halperin, London School of Economics, UK

This chapter explores institutional and socio-organisational factors that influence the adoption and use of learning management systems (LMS) in the context of higher education. It relies on a longitudinal case study to demonstrate the ways in which a set of institutional and organisational factors were drawn into the formation and shaping of e-learning practices. Factors found to figure predominantly include institutional conventions and standards, pre-existing activities and routines, existing resources available to the institution, and, finally, the institution's organisational culture. The analysis further shows that socio-organisational factors may influence e-learning implementation in various ways, as they both facilitate and hinder the adoption of technology and its consequent use. It is argued that institutional parameters have particular relevance in the context of hybrid modes of e-learning implementation, as they illuminate the tensions involved in integrating technological innovation into an established system.

Chapter VII
E-Learning Value and Student Experiences: A Case Study ............................................................... 112
Krassie Petrova, Auckland University of Technology, New Zealand
Rowena Sinclair, Auckland University of Technology, New Zealand
This chapter focuses on understanding how the value of student learning and the student learning experience could be improved given the pertinent environmental and academic constraints of an e-learning case. Believing that a better understanding of student behaviour might help course design, the chapter revisits the outcomes of two studies of e-learning and analyses them further using a framework which conceptualises the value of e-learning from a stakeholder perspective. The main objective of the chapter is to identify some of the important issues and trends related to perceived e-learning value. The analysis of the emerging and future trends indicates that in the future, the blending of e-learning and face-to-face learning is likely to occur not only along the pedagogical, but also along the technological and organizational dimensions of e-learning. Therefore, new blended learning and teaching models should further emphasise the alignment of learning with work/life balance.

Chapter VIII
Integrating Technology and Research in Mathematics Education: The Case of E-Learning ............. 132
Giovannina Albano, Università di Salerno, Italy
Pier Luigi Ferrari, Università del Piemonte Orientale, Italy

This chapter is concerned with the integration of research in mathematics education and e-learning. We provide an overview of research on learning processes related to the use of technology and a sketch of constructive and cooperative methods and their feasibility in an e-learning platform. Moreover, we introduce a framework for dealing with language and representations to interpret students' behaviours and show examples of teaching activities. Finally, some opportunities for future research are outlined. We hope to contribute to overcoming the current separation between technology and educational research, as their joint use can provide matchless opportunities for dealing with most of the learning problems related to mathematical concepts as well as to linguistic, metacognitive, and noncognitive factors.

Chapter IX
AI Techniques for Monitoring Student Learning Process .................................................................. 149
David Camacho, Universidad Autonoma de Madrid, Spain
Álvaro Ortigosa, Universidad Autonoma de Madrid, Spain
Estrella Pulido, Universidad Autonoma de Madrid, Spain
María D. R-Moreno, Universidad de Alcalá, Spain

The evolution of new information technologies has opened up new possibilities for developing pedagogical methodologies that provide the necessary knowledge and skills in the higher education environment. These methodologies are built around the use of the Internet and other new technologies, such as virtual education, distance learning, and lifelong learning. This chapter focuses on several traditional artificial intelligence (AI) techniques, such as automated planning and scheduling, and how they can be applied to pedagogical and educational environments. The chapter describes both the main issues related to AI techniques and e-learning technologies, and how lifelong learning processes and problems can be represented and managed using an AI-based approach.

Chapter X
Knowledge Discovery from E-Learning Activities ............................................................................ 173
Addisson Salazar, Universidad Politécnica de Valencia, Spain
Luis Vergara, Universidad Politécnica de Valencia, Spain

This chapter presents a study applied to the analysis of the utilization of Web-based learning resources in a virtual campus. A huge amount of historical Web log data from e-learning activities, such as e-mail exchange, content consulting, forum participation, and chats, is processed using a knowledge discovery approach. Data mining techniques such as clustering, decision rules, independent component analysis, and neural networks are used to search for structures or patterns in the data. The results show the detection of students' learning styles based on a known educational framework, and useful knowledge of global and specific content on academic performance success and failure. From the discovered knowledge, a set of preliminary academic management strategies to improve the e-learning system is outlined.

Chapter XI
Swarm-Based Techniques in E-Learning: Methodologies and Experiences ...................................... 199
Sergio Gutiérrez, University Carlos III of Madrid, Spain
Abelardo Pardo, University Carlos III of Madrid, Spain

This chapter provides an overview of the use of swarm-intelligence techniques in the field of e-learning. Swarm intelligence is an artificial intelligence technique inspired by the behavior of social insects. Taking into account that the Internet connects a high number of users with negligible delay, some of those techniques can be combined with concepts from sociology and applied to e-learning. The chapter analyzes several such applications and exposes their strong and weak points. The authors hope that understanding the concepts used in the applications described in the chapter will not only inform researchers about an emerging trend, but also provide them with interesting ideas that can be applied to and combined with any e-learning system.

Chapter XII
E-Learning 2.0: The Learning Community ........................................................................................ 213
Luisa M. Regueras, University of Valladolid, Spain
Elena Verdú, University of Valladolid, Spain
María A. Pérez, University of Valladolid, Spain
Juan Pablo de Castro, University of Valladolid, Spain
María J. Verdú, University of Valladolid, Spain

Nowadays, most electronic applications, including e-learning, are based on the Internet and the Web. As the Web advances, applications should progress along with it. People in the Internet world have started to talk about Web 2.0. This chapter discusses how the concepts of Web 2.0 can be transferred to e-learning. First, the new trends of the Web (Web 2.0) are introduced and Web 2.0 technologies are reviewed. Then, the chapter analyses how Web 2.0 can be transferred and applied to the learning process, in terms of methodologies and tools, taking into account different scenarios and roles. Next, some good practices and recommendations for E-Learning 2.0 are described. Finally, we present our opinions, conclusions, and proposals about the future trends driving the market.

Chapter XIII
Telematic Environments and Competition-Based Methodologies: An Approach to Active Learning .............................................................................................................................................. 232
Elena Verdú, University of Valladolid, Spain
Luisa M. Regueras, University of Valladolid, Spain
María J. Verdú, University of Valladolid, Spain
Juan Pablo de Castro, University of Valladolid, Spain
María A. Pérez, University of Valladolid, Spain

This chapter provides an overview of technology-based competitive active learning. It discusses competitive and collaborative learning and analyzes how well the different strategies suit different individual learning styles. First, some classifications of learning styles are reviewed. Then, the chapter discusses competitive and collaborative strategies as active learning methodologies and analyzes their effects on students' outcomes and feelings, according to their learning styles. Next, it shows how networking technology can mitigate the possible negative aspects. All the discussion is supported by significant case studies from the literature. Finally, an innovative system for active competitive and collaborative learning is presented as an example of a versatile telematic learning system.
Chapter XIV
Open Source LMS Customization: A Moodle Statistical Control Application .................................. 250
Miguel Ángel Conde, Universidad de Salamanca, Spain
Carlos Muñoz Martín, CLAY Formación Internacional, Spain
Alberto Velasco Florines, CLAY Formación Internacional, Spain

This chapter reflects on the possibility of adapting a learning management system (LMS) to the needs of a company or institution. In this case, ACEM allows the definition of course-level and platform-level reports and the automatic generation of certificates and diplomas for the Moodle LMS. These adaptations are intended to complement learning platforms by contributing added-value features such as customizable diplomas, certificates, and reports, which allow information to be obtained about both grades and participation in every activity of a course. None of these features is provided by default.

Chapter XV
Evaluation and Effective Learning: Strategic Use of E-Portfolio as an Alternative Assessment at University .......................................................................................................................................... 264
Nuria Hernández, Universidad de Oviedo, Spain

This chapter analyses evaluation as a strategic instrument to promote active and significant learning, and how, in that strategy, the use of alternative assessment and technology-aided learning-and-teaching processes can be of great help. There is an important margin that allows teachers to design assessment strategically and modify the nature of the students' learning activities. The central question, then, is whether the electronic portfolio used as an assessment tool in the subject “International Economic Relations” has been used strategically. In other words, is the type of desired learning really being achieved? Is significant and deep learning being stimulated? If not, what kind of learning is being stimulated? How should the assessment be modified to achieve the desired results? To help answer these questions, we have analysed whether the activities and products which make up the “International Economic Relations” portfolio fulfil the conditions that characterise a strategic evaluation.

Chapter XVI
Formative Online Assessment in E-Learning .................................................................................... 279
Izaskun Ibabe, University of the Basque Country, Spain
Joana Jauregizar, Quality Evaluation and Certification Agency of the Basque University System, Spain

This chapter provides an introduction to formative assessment, especially as applied within an online or e-learning environment. The characteristics of the four strategies of online formative assessment currently most widely used—online adaptive assessment, online self-assessment, online collaborative assessment, and portfolio—are described. References are made throughout to recent research on the effectiveness of online formative assessment for optimizing students' learning. A case study in which a computer-assisted assessment tool was used to design and apply self-assessment exercises is presented. The chapter emphasizes the idea that all types of assessment need to be conceptualized as “assessment for learning.” Practical advice is detailed for the planning, development, implementation, and review of quality formative online assessment.

Chapter XVII
Designing an Online Assessment in E-Learning ................................................................................ 301
María José Rodríguez-Conde, Universidad de Salamanca, Spain

In this chapter we carry out an analysis of the term “assessment,” applied to all the elements which constitute the training environment (evaluation), with particular attention to the assessment of the learning process developed in the frame of what we call e-learning. The perspective guiding the text is of a methodological and pedagogical nature. We plan the assessment process in online training environments, dealing in depth with the different elements which constitute it: objectives and functions of assessment, assessment criteria and indicators, people involved and assessment agents, software instruments and tools for the collection of data, and analysis of the information and reports. We raise a discussion about institutional strategies for the incorporation of this e-assessment methodology in higher education institutions and come to final conclusions about the validity and appropriateness of e-learning assessment processes.

Chapter XVIII
Quality Assessment of E-Facilitators ................................................................................................. 318
Evelyn Gullett, U21Global Graduate School for Global Leaders, Singapore

Organizations, in particular HR/training departments, strive to establish good practices, quality assurance, and improvement on a continuing basis. With the continuous growth of online university programs, it is crucial for e-learning establishments to include service quality assessments along with mechanisms to help e-facilitators consistently maintain the highest quality standard when lecturing, teaching, guiding, administering, and supporting the online learner. This chapter discusses the application of an e-quality assessment matrix (e-QAM) as part of a quality assessment model that promotes continuous improvement of the e-learning environment. This model will serve as a tool for online universities and organizations to achieve a base standard of consistent quality that is essential for program accreditation and the satisfaction of global customers.

Chapter XIX
E-QUAL: A Proposal to Measure the Quality of E-Learning Courses .............................................. 329
Célio Gonçalo Marques, Instituto Politécnico de Tomar, Portugal
João Noivo, Universidade do Minho, Portugal

This chapter presents a method to measure the quality of e-learning courses. An introduction is first presented on the problem of quality in e-learning, emphasizing the importance of considering learners' needs at all development and implementation stages. Next, several projects related to quality in e-learning are mentioned, and some of the most important existing models are described. Finally, a new proposal is presented, the e-Qual model, which is structured into four areas: learning contents, learning contexts, processes, and results. With this chapter, the authors aim not only to draw attention to this complicated issue but above all to contribute to a higher credibility of e-learning by proposing a new model that stands out for its simplicity and flexibility in analyzing different pedagogical models.
Compilation of References ............................................................................................................... 350
About the Contributors .................................................................................................................... 386
Index .................................................................................................................................................. 394
Preface
IntroductIon Web-based training, actually known as e-learning, has experienced a remarkable evolution and growth in the last few years. This is certainly due to enormous advances in information and communication technologies (ICT), and also to the increasing demands to make training compatible with the professional and personal lives of any citizen, and not just something created for young students looking for a degree. Training must be available as a lifelong experience, both for academic studies and for nonformal or informal situations. E-learning is supposed to be an excellent solution for the old problem of mass education, beyond that of an impractical apprenticeship method, since there are far too many knowledge seekers and not enough knowledge providers. The initial increase and even euphoria associated with e-learning, due to the new possibilities it seemed to offer, gave place to a generalized feeling of disillusionment, because results did not show e-learning to be a tool for quality training, and ROI were not really satisfactory. This was contrary to what we one could have thought initially (García-Peñalvo & López-Eire, 2007). There exists no single reason that can explain the failure of so many e-learning initiatives. Perhaps lack of maturity could be the most realistic and global cause. This situation was mainly caused, among other variables, by a pre-eminence of technological factors above other methodological or didactical elements. E-learning started as something mainly technological, not as an activity whose aim was human learning. In fact, most books on the subject show this unbalance clearly because human aspects are considered as if they were unnecessary or, in many cases, because the human factor in e-learning is considered different from any other learning modality. 
Consequently, the inefficiency of e-learning seemed to be due to technological elements, as if the responsibility for success or failure in e-learning processes depended on the technological tools available. This was, of course, not true. Rosenberg (2006) describes this situation very well, presenting the evolution of the e-learning field in three phases. The first concerns itself with contents, that is, with the quantity of courses, and with the investment in technology needed to deliver them. This effort is focused on technology itself, taking as criteria for success how much you do, how quickly you do it, and how many courses you offer. A second stage is about quality and impact factors, and here success is related to innovative instructional applications, learning-by-doing models, and higher cost-benefit ratios. Finally, the third phase tackles business performance to design more comprehensive solutions that include training, improve knowledge sharing, and offer more intelligent ways of collaboration and interaction, all in the context of work. Business measures like productivity, customer and employee satisfaction, organizational agility, and marketplace performance are the metrics that matter here. The reality is that many organizations are bogged down in the first stage. They have introduced different kinds of technology artifacts in a variety of innovative ways, and have met widely varying levels of success. Unfortunately, there are too many examples that show a very disturbing situation:
these organizations do not get a reasonable relationship between investments in training and the results they obtain. This situation presents us with “black and white” e-learning, as Martínez (2006) says. In spite of everything, the growth of e-learning is unstoppable, and every important institution (academic, enterprise, or otherwise) knows about the necessity of creating and developing a department or service specially devoted to this subject. E-learning deserves to be considered a real revolution, “The Globalization of Training.” This is not only because this sort of training is delivered on the Internet, but also because of the involvement of entities very different from those traditionally “authorized” to provide training, that is, academic institutions. Any institution (not just academia) can plan its own training strategy, and so learning is now possible anytime and anywhere. Current perspectives on e-learning initiatives are more realistic, and show a more mature conception of this field, but there is still a long way to go. The idea of “quality in e-learning” must guide us if we want to meet our educational challenges successfully. In order to show potentially successful ways to plan and carry out such a complex project, we are going to study in depth the most relevant obstacles that hinder the e-learning process. After this, as a preface to the practical knowledge and proven, high-value experiences contained in the following chapters, we can propose a complete e-learning perspective in keeping with the concept of quality in e-learning.
A Framework to Avoid E-Learning Pitfalls

Quite a few works describe a sad paradox in the deployment of e-learning systems. Many institutions have a learning platform in place (more than one, in many cases), but it is used by less than half of the teaching staff. This paradox is especially true in the context of higher education institutions, that is to say, in universities. While it is true that some sectors demand investments in teaching technology, trying to get equipment whose utility has been tested before it is demanded, one can also find other institutional investments for which there is no clear need. If the teaching community sees no need for these resources, it will resist using them. This is probably the cause of the lack of interest one sees towards e-learning among the teaching staff: they do not appreciate any utility in its use in the context of standard teaching, because institutions tend to think that “everybody knows” what to do with these platforms. If worse comes to worst, there is a feeling that teachers will somehow end up knowing how to use them. Even though this is clearly something to worry about, it is by no means the only problem that precludes a proper use of these resources. One can synthesize the other causes into three categories.
There Is No Real Intent in Institutions (“Use the Platform or Suffer”)

If no need has been created before deploying the e-learning platform, it is essential to create it as soon as possible, and to do it properly. In most institutions there is no real policy concerning ICT, and more precisely concerning e-learning. Setting up a virtual campus is a much more radical change than the incorporation of any other technology or medium adopted in the recent past. Using this virtual campus means a real shift in the training paradigm. Hence, one must prepare for this change, and for that it is necessary to develop specific policies about e-learning, with a clearly defined strategic model. The proper policy concerning e-learning must be complemented with investments in human resources, in technology, and in methodology. Without this trio of elements, the tool itself is pointless, which is the worst possible outcome in training terms.
Users Are Alone

Any teacher who decides to make use of an online training system, be it out of curiosity or just as a personal challenge, is going to meet a whole range of problems when trying to work things out alone. Which methodology should I use? Who will help me create materials? How is this evaluated? Who will solve technical problems for me? How could I make this platform supply this or that need in the subject I teach? Who will help me tutor if I have about 200 students? Many of these questions find no answer. The teacher, who so far was able to handle his or her class and managed to fulfill his or her duties, meets quite a few new tasks for which he or she has no training, and perhaps this lack is not his or her fault. E-learning requires many support services for teaching; without them, the teacher’s job is severely limited, and consequently many formative possibilities are lost.
There Is No Recognition for the Teaching Effort Needed for Any Online Action

There are two rather common fallacies among those who know little about e-learning. One of them is that e-learning is virtual, that is to say, a subproduct of training and not “real” training like presential teaching. The other is that any activity delegated to an e-learning platform frees the teacher from part of his or her duties, thus reducing the teacher’s dedication. Nothing could be further from the truth, as is well known to those who are dedicated to online teaching. On the contrary, correctly supporting a group of students in the context of an e-learning methodology certainly enhances the trainee’s experience, but it tends to increase remarkably the amount of time that the teacher must invest in teaching tasks, in formative training, and in tutorial activity. Regrettably, as a consequence of these prejudices, teachers (and this is especially true in university contexts) are “penalized” when using e-learning as a complement to their teaching activity. If they opt to meet the challenge, they will get exactly no recognition in academic or economic terms. A large amount of time will have to be dedicated to this “silent” teaching effort, and the rest of the community will take no notice. Since everything happens in a “virtual” context, there will be no visible tracks left, no classroom or lab reservations. Any time dedicated to this job by the teacher is considered “virtual” in all respects. But this time is all too real. This type of situation, which has a most negative impact, should move any organization with an interest in online teaching towards the adoption of a strategic policy that fulfills the requirements of a society that wants and needs information and knowledge in a flexible context. This society, however, is fairly strict concerning the quality of the product it is going to consume.
The context in Europe is expressed quite clearly in the definition of the European Higher Education Space (European Ministers of Education, 1999), which is definitely in favor of lifelong training, since this will contribute to the improvement of citizens’ opportunities according to their aspirations and abilities, and consequently enhance their personal, social, and professional development (Cieza, 2006). Any ad hoc solution to this situation is bound to produce a small and not very positive return on our investments. Any attempt to make serious use of e-learning should be strategic, in such a way that the deployment of an e-learning platform is one of the visible vertices of a polyhedral set of measures. These must constitute a whole strategic plan, which will affect training of course, but also research, services, administration, and even the management and leadership of universities. If this is not done, one will face the risk of having to redo part of the job if it was initiated in an erratic way through lack of foresight, or one can reach a state of rigidity in the electronic “structure,” thus producing a fragmentation that would be harmful since it would keep apart organs of the institution that should be perfectly well coordinated. The strategic foundations, which an institution must use when trying to adopt a policy for the deployment of an e-learning structure, can and should be based on the concept of “quality in e-learning.”
Quality in E-Learning

Before talking about quality in e-learning, one must define what exactly we mean when we refer to e-learning. The application of Web-based tools for learning purposes could be considered a simple definition of e-learning. However, a clearer definition, including a conceptualization of its modalities, is the best starting point for understanding the quality reference framework on which we would like to develop this book. Hence, one could define e-learning as: a teaching-to-learning process aimed at the acquisition of a set of skills and competences by students, trying to ensure the highest quality of the whole process, thanks mainly to the use of Web-based technologies, a set of sequenced and structured contents based upon predefined but flexible strategies, interaction with the group of students and tutors, appropriate evaluation procedures (both of learning results and of the whole learning process), a collaborative working environment with space- and time-deferred presence, and finally a sum of value-added technological services in order to achieve maximum interaction. It is quite common to attach adjectives like “virtual” or “distance” to “learning” in order to build synonyms for “e-learning.” But it is important to clarify that we are not thinking about virtual learning or distance learning when we refer to e-learning, at least not necessarily. When we develop a quality e-learning initiative, the development of skills and knowledge is easier to demonstrate than in a traditional or presential context. So if we consider “virtual” the opposite of “real,” e-learning is real, not virtual, learning.
But, from a philosophical point of view, virtual is “all that can induce an effect.” If we consider that e-learning differs from many other forms of “learning” because of its active approach, it is clearly “virtual”; that is to say, it has the virtuality to “create” and not only to “assume” knowledge and skills. Concerning distance learning, it is a common mistake to consider e-learning a form of distance learning; applying the methods and categories of distance learning to e-learning yields really poor results. This is because e-learning is not nonpresential in the way distance learning is. The actors in this process are present, at a different time and in a different place, but their presence is verifiable, and they leave certain tracks. So e-learning is more than distance learning, and this is because of the human presence behind the technology, the net, and the computers. One of the main issues in e-learning (and of course in every learning experience, as for any product or service) is the notion of “quality.” This concept, in fact, does not belong exclusively to the universe of industry and economics. The academic world is fairly used to the need to measure certain items in order to determine quality in its learning processes. Quality in e-learning has a twofold significance. First, e-learning is associated in many discussion papers and plans with an increase in the quality of educational opportunities, ensuring that the shift to the information society is more successful. This context is named “quality through e-learning.” Second, there is a separate but associated debate about ways of improving the quality of e-learning itself; this context is called “quality for e-learning” (Ehlers, Goertz, Hildebrandt, & Pawlowski, 2005). Learning outcomes are at the heart of respondents’ understanding of quality in the field of e-learning. When we talk about quality in e-learning, we assume an implicit consensus about the term “quality.” The ISO (ISO 8402, 1986, p. 3.1) defines quality as follows: “The totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs.” In fact, however, “quality” means very different things to different e-learning providers. García-Peñalvo (2006) points out five factors: technology, services, evaluation/accreditation, contents, and the human factor (tutoring). Harvey and Green (2000) have suggested the following set of categories: exceptionality, perfection or consistency, fitness for purpose, and adequate return. Ehlers (2004) adds a fifth category, transformation, which describes the increase in competence or ability resulting from the learning process.
Matching these ideas, we can define quality in e-learning as: the effective acquisition of a suite of skills, knowledge, and competences by students, by means of developing appropriate learning contents delivered with a set of efficient Web tools supported by a network of value-added services, whose process—from content development to the acquisition of competences and the analysis of the whole intervention—is ensured by an exhaustive and personalized evaluation and certification process, and is monitored by a human team exercising a strong and integral tutorial presence throughout the whole teaching-to-learning process.
Organization of the Book

In a few words, the idea behind this book is that a quality e-learning process is much more than technology. Technical issues will have an important place in this book, of course, but they must be considered alongside other issues: pedagogical, methodological, tutorial, evaluation, communication, strategic, and so on. Advances in E-Learning: Experiences and Methodologies is addressed to any scholar, technician, academic, or manager who could play a role in the field of e-learning, so the audience is extremely heterogeneous. In fact, it is difficult to pin down a single field of knowledge or activity, because any field and any professional role could potentially be interested in e-learning, given its enormous capabilities applicable to institutions, schools, universities, enterprises, associations, and so forth. Above all, the book does not give a restricted vision of e-learning, but a multidisciplinary, rich, and complete analysis of the different issues involved, intending to become a reference in the e-learning literature: the different issues are not studied as separate matters; rather, every question related to e-learning studied in this book is oriented towards achieving the highest quality in e-learning activities. The book is organized into nineteen chapters. A brief description of each chapter follows.

In Chapter I, Ray Webster presents RAPAD, a reflective and participatory methodology for e-learning and lifelong learning. It proposes an adaptive method whereby students can participate with peers, developers, teachers, and trainers to think about their learning, discuss it, and apply their thoughts to the design and development of Web sites which can serve as Personalized E-Learning Environments (PELE), promoting a deep understanding of learning on a metacognitive and personal level.

Chapter II introduces some ideas of the German philosopher Martin Heidegger and how they can be applied to e-learning design.
This approach argues that practice must be the center of knowledge creation, which in the case of professional and corporate education is a real work situation. The chapter has been written by one of the most renowned e-learning consultants in the world, Dr. Sergio Vásquez.

Continuing with the philosophical approaches, Chapter III, by professors Seoane-Pardo and García-Peñalvo, outlines the background concepts needed to construct a human-centered methodology for online training. This chapter analyzes the constructivist paradigm in a critical way, stating that this framework is not a method and explaining the problems that derive from this confusion.

Chapter IV, by Angelica Rísquez, addresses mentoring in online teaching as a qualitatively different concept from its traditional face-to-face version, and shows how the relationship between mentor and mentee is modified by technology in unique and definitive ways. The chapter introduces a set of best practices for the design, implementation, and evaluation of e-mentoring programs.

In Chapter V, Dr. Olga Díez deals with the issue of lifelong learning and describes an experience in teacher training for e-learning in the field of adult education. The chapter discusses the balance between mere ICT skills and pedagogical competences. The author argues that the learning design should always
allow teachers in training to integrate into their work ICT solutions that fit the didactic objectives, renew teaching and learning methodology, facilitate communication, give room to creativity, and allow pupils to learn at their own pace.

Chapter VI is about institutional and socio-organizational factors that influence the adoption and use of Learning Management Systems in higher education institutions. Ruth Halperin presents a hybrid e-learning case study to explore these factors, in which institutional parameters have particular relevance, underlining the tensions involved in integrating technological innovation into an established system.

Krassie Petrova and Rowena Sinclair focus Chapter VII on understanding how the quality of student learning and the student learning experience could be improved given the pertinent environmental and academic constraints of an e-learning case. The main objective of the chapter is to identify some of the important issues and trends related to perceived e-learning value. They state that new blended learning and teaching models should further emphasize the alignment of learning with work/life balance.

Chapter VIII, by Giovannina Albano and Pier Luigi Ferrari, provides an overview of research on learning processes related to the use of technology and a sketch of constructive and cooperative methods and their feasibility on an e-learning platform in the context of mathematics education.

In Chapter IX, David Camacho et al. describe the main issues related to artificial intelligence (AI) techniques and e-learning technologies, and how lifelong learning processes and problems can be represented and managed using an AI-based approach, in order to implement group-based adaptation driven by the actions not of an individual student but of a set of students who have accessed the system over a period of time.

Chapter X presents a study of the utilization of Web-based learning resources in a virtual campus.
The authors, Addisson Salazar and Luis Vergara, use this case study to detect the learning styles of students based on a known educational framework, and to extract useful global and specific knowledge about academic success and failure.

In one of the most computationally oriented chapters of this book, Chapter XI, Sergio Gutiérrez and Abelardo Pardo describe the use of swarm-intelligence techniques in the field of e-learning, analyzing several such applications and exposing their strong and weak points. Swarm intelligence is an AI technique inspired by the behavior of social insects. Taking into account that the Internet connects a high number of users with negligible delay, some of those techniques can be combined with sociology concepts and applied to e-learning.

Chapter XII is devoted to Web 2.0 applied to the e-learning area. Luisa Mª Regueras et al. present how this technology movement can be transferred and applied to the learning process, in terms of methodologies and tools, taking into account different scenarios and roles in order to emphasize the collaborative way of learning.

As an example of the ideas expressed in the previous chapter, in Chapter XIII Elena Verdú et al. discuss competitive and collaborative learning; they analyze how adequate the different strategies are for different individual learning styles, all within an active learning context. The ideas are supported by a case study and an active learning system.

Chapter XIV presents a report system plug-in for Moodle developed by the Clay Formación Internacional team. It shows the possibility of adapting an LMS to the needs of an institution, and is an interesting example of how to combine Open Software ideas with an enterprise context.

In Chapter XV, Nuria Hernández analyzes evaluation as a strategic instrument to promote active and significant learning.
Within this strategy, the author argues that an electronic portfolio, as an assessment element, can help students generate suitable learning.
Chapter XVI presents a very valuable state of the art of formative assessment in e-learning-based systems. Izaskun Ibabe and Joana Jauregizar describe the four most used strategies for online formative assessment: online adaptive assessment, online self-assessment, online collaborative assessment, and the portfolio. Through a case study, they argue that all types of assessment need to be conceptualized as “assessment for learning.”

In Chapter XVII, which is related to the previous one, Dr. Mª José Rodríguez-Conde analyzes the term assessment, applied to all the elements that constitute the training environment (evaluation), and focuses in particular on the assessment of the learning process developed in the frame of e-learning. The most interesting part of this chapter presents a highly valuable discussion of institutional strategies for incorporating this e-assessment methodology into higher education organizations.

In Chapter XVIII, Dr. Evelyn Gullett discusses the application of an e-quality assessment matrix (e-QAM) as part of a quality assessment model that promotes continuous improvement of the e-learning environment. This model is intended as a reference tool for organizations to achieve a base standard of consistent quality that is essential for program accreditation and satisfaction.

In the last chapter, Célio Gonçalo Marques and João Noivo introduce a method to measure the quality of e-learning courses. They present a new quality reference model, the e-Qual model, which is derived from the analysis of reference frameworks developed through international projects. E-Qual is very flexible, adapting itself to the evaluator’s perspective (learners, producers, and distributors) and to the contents and contexts perspective.
References

Cieza, J. A. (2006). E-learning factors. A lifelong learning challenge inside the European space for higher education framework. In F. J. García, J. Lozano, & F. Lamamie de Clairac (Eds.), Virtual campus 2006 post-proceedings. Selected and extended papers–VC’2006, CEUR Workshop Proceedings. Retrieved November 1, 2007, from http://CEUR-WS.org/Vol-186/

Ehlers, U.-D. (2004). Qualität im e-learning aus lernersicht: Grundlagen, empirie und modellkonzeption subjektiver qualität. Wiesbaden: VS Verlag.

Ehlers, U.-D., Goertz, L., Hildebrandt, B., & Pawlowski, J. M. (2005). Quality in e-learning. Use and dissemination of quality approaches in European e-learning. A study by the European Quality Observatory. Cedefop Panorama series, 116. Luxembourg: Office for Official Publications of the European Communities.

European Ministers of Education. (1999, June 19). The European higher education area–Bologna declaration. Bologna.

García-Peñalvo, F. J. (2006). Introducción al eLearning. In F. J. García-Peñalvo et al. (Eds.), Profesiones emergentes: Especialista en eLearning. Salamanca, Spain: Clay Formación Internacional.

García-Peñalvo, F. J., & López-Eire, A. (2007). Successful e-learning case studies in Spanish University. Journal of Cases on Information Technology (JCIT), 9(2), 1-3.

Harvey, L., & Green, D. (2000). Qualität definieren: Fünf unterschiedliche ansätze. Zeitschrift für Pädagogik: Qualität und Qualitätssicherung im Bildungsbereich: Schule, Sozialpädagogik, Hochschule, 41, 17-39.
ISO. (1986). Quality–Vocabulary. ISO 8402. Geneva: International Organization for Standardization. Martínez, J. (2006). E-learning en blanco y negro. Learning Review, 14. Rosenberg, M. J. (2006). Beyond e-learning. Approaches and technologies to enhance organizational knowledge, learning, and performance. San Francisco, CA: Pfeiffer.
Acknowledgment
It is imperative to begin these few lines with my special thanks to the authors and reviewers of every chapter, whose labour and dedication were so remarkable as to make it easy to complete this work. I am equally grateful to those who helped with the blind review process, without whom it would have been impossible to achieve a book of this caliber. But my special thanks in reviewing go to my colleagues of the Research Group on InterAction and eLearning (GRIAL), who gave their time and effort to provide constructive and comprehensive feedback that was extremely useful in finishing this work. They include Valentina Zangrando and Antonio Seoane, who helped me with the final revision of the entire book. I would also like to thank the editor Jessica Thompson for her efficiency and generosity in working with us, and the publishing team at IGI Global for their competence and expertise. Finally, I express my gratitude to the Education and Science Ministry of Spain, National Program in Technologies and Services for the Information Society, since this book has been developed within the context of the KEOPS research project (Ref.: TSI2005-00960), financed by the Government of Spain. Francisco José García Peñalvo University of Salamanca
Chapter I
RAPAD:
A Reflective and Participatory Methodology for E-learning and Lifelong Learning Ray Webster Murdoch University, Australia
Abstract

This chapter introduces RAPAD, a reflective and participatory methodology for e-learning and lifelong learning. It argues that by engaging in a reflective and participatory design process for a personalized e-learning environment, individual students can attain a conceptual change in understanding the learning and e-learning process, especially their own. Students use a framework provided by the concept of a personal cognitive or learning profile and the design and development of a personalized e-learning environment (PELE) to engage with key aspects of their learning. This results in Flexible Student Alignment, a process by which students are better able to match their learning and e-learning characteristics and requirements to the practices, resources, and structures of universities in the emerging knowledge society. The use of Web-based technologies and personal reflection ensure that RAPAD is well-placed to be an adaptive methodology which continues to enhance the process of lifelong learning.
Introduction

This chapter describes a reflective and participatory methodology for the design of personalized virtual e-learning environments—the reflective and participatory approach to design (RAPAD) (Webster, 2005). With RAPAD, students and users reflect and participate with peers, developers, teachers, and trainers to think about their learning, discuss it, and apply their thoughts to the design and development of Web sites which can serve as personalized e-learning environments (PELE). This process, RAPAD, is a methodology for enhancing e-learning and lifelong learning because it promotes a deep understanding of learning on a metacognitive and personal level. The metacognitive and self-regulatory improvements brought about by using RAPAD cause a
conceptual shift in the understanding and application of each individual’s attitudes to personalized learning. Enabling this conceptual shift is seen as a necessary prerequisite for improving the quality of student learning (Vermetten, Vermunt, & Lodewijks, 2002). The quality of student learning is of central importance in the transition to a knowledge-based economy. Because of the strong links between education, training, and the needs of knowledge workers in industry and commerce, participatory methodologies like RAPAD can become very important mechanisms for developing e-learners and lifelong learners for the Knowledge Society. As a reflective and participatory methodology, RAPAD provides a framework and set of procedures to enable each individual to understand his or her learning preferences, and thus enhances e-learning and lifelong learning. Two core mechanisms are used within RAPAD to strengthen the reflective and participatory process: the cognitive profile and the personalized e-learning environment (PELE). Using the concept of a cognitive profile enables the personalization of the PELE through structured reflection on individual learning-related characteristics. The cognitive profile, as used here, consists of measures of each student’s cognitive style, learning style, and personality type. This reflects the assertion that it is the combination of these three measures which best reflects each “individual’s combination of aptitude/trait strengths and weaknesses” in terms of learning (Jonassen & Grabowski, 1993, p. xii). Students undertake a series of profile-associated tests at the start of the exercise and are given their results. They then discuss, reflect, and comment on those results before using them in designing their PELE. Designing and developing the PELE with specific reference to the personal learning profile gives both a context and a focus to the development of the e-learning support system that the PELE represents.
The structure of the chapter is as follows. Several definitions and key terms, as used in this
chapter, are introduced. This is followed by a background section which discusses the need for new and personalized approaches to supporting e-learning. The next sections consider the changing conceptions of learning, discuss the complexity of learning, and, in order to provide a coherent overview of the work, offer a systems perspective on the student, methodology, and PELE as a learning system. The concept of Flexible Student Alignment (Webster, 2005), which is partially enabled by taking a systems perspective, is then introduced, before the need for human-centred e-learning systems design and participatory design (as an example of a human-centred design methodology) is outlined. The development of RAPAD as a participatory methodology is then summarized. This is followed by a broad description of the research phases and empirical work which comprised the development of RAPAD as an e-learning methodology. Future trends are then suggested before conclusions are drawn and the chapter is summarized.
Definitions

The reflective and participatory approach to design is an iterative process in which key elements include the student as a codesigner in the production of a system or PELE. The method is used as a mechanism to help each student acquire the self-regulatory skills associated with autonomous learning. The methodology provides a conceptual framework of structure and process for the student to function within. The next section briefly introduces some key terms. The terms are defined with reference to RAPAD and their use in that context is explained.
Reflective

The term “reflective” as used here derives from Schön’s (1983, 1991) use of the term in both the phrase and the sense of “a reflective practitioner.” Schön considered that many professionals
RAPAD
in fields such as law, engineering, architecture, and medicine developed and consolidated their learning by reflecting on their practice and also by reflecting in the performance of their practice. Schön (1987) then applied the concept of learning by doing and continuing to learn through reflection and problem solving to education. It is considered that students need to develop as reflective practitioners with respect to their own learning. The purpose is to help them to function and participate effectively in a systemically different system of higher education.
Design

The term “design” comes before “participatory” because the latter term is a subset of the concepts encapsulated within the term “design.” Design is used in the sense of Systems Analysis and Design which, at a conceptual level, derives from systems theory (Checkland & Holwell, 1998) and uses associated concepts to understand the activity of all types of systems. The term is most often used in the context of information systems design and development and has become almost synonymous with information and communications technology based systems design. It is used in a broader manner in this work, as the research considers the design of learning environments from a systems perspective. The systemic perspective can embrace and contain the systematic and analytic methodology associated with much systems analysis and design, while the reverse is rarely true.
Participatory

“Participatory design” is a phrase used in the information and technology design fields to indicate the very close and full participation of the system users in the process of the design and development (and testing and implementation and review) of the system in question (Preece, Rogers, & Sharp, 2002). It differs from other user-centred approaches in that the user is a partner in the development process rather than the client of it, a key difference in terms of involvement. As the name suggests, a participatory approach is introduced to encourage deep user involvement in the design process (Preece et al., 2002). The result is that both the system designer (or design team) and the user(s) can benefit from and learn from each other.
Cognitive Profiles

The reflective and participatory approach was operationalized by the use of student cognitive profiles, which were applied to the design of the PELE. This method of profiling gave the students a framework in which to structure and apply their reflection. A cognitive profile is made up of three core elements: measures of the student’s cognitive style, learning style, and personality type (Jonassen & Grabowski, 1993). The cognitive style measure chosen was Riding’s Cognitive Styles Analysis (CSA; Riding, 1991, 2001). This comprises a computer-based test which measures personal preferences for representing and processing information. The learning styles instrument was the Approaches to Study Skills Inventory for Students (ASSIST; Tait et al., 1998; Entwistle et al., 2000). This instrument measures deep, surface, and strategic approaches to learning, with each approach containing several subcategories. The personality instrument was the Myers-Briggs Type Indicator (MBTI; Myers et al., 1999), a widely used instrument for measuring personality type. As with most instruments in this area, there is continuing debate concerning reliability and validity (Bayne, 1995; Nowak, 1997; Peterson et al., 2003; Coffield et al., 2004). Although care was taken to choose well-tried, tested, and widely used instruments, this was considered less critical for the purposes of this project than for experimental research designs, as the measures were being used as a framework for reflection and design rather than for the purposes of category labeling. The
students could disagree with the results but had to say why, explain which category and learning traits they considered correct and use these new criteria as part of the reflective design process. The students took the three tests at the start of the process. After some discussion of the ideas and concepts involved, they were given the test results and asked to write about them in reflective journals. The involvement of the students in reflecting on their own responses and then applying them to the learning environment design formed a central part of the application of RAPAD.
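For readers who think in code, the three-part profile can be sketched as a simple data structure that also records the student’s reflective responses. This is an illustrative sketch only: the field names, category values, and scores below are hypothetical examples, not actual output from the CSA, ASSIST, or MBTI instruments, and no such structure is described in the original study.

```python
from dataclasses import dataclass, field

@dataclass
class CognitiveProfile:
    """Illustrative three-part profile used as a framework for reflection.

    All values shown are hypothetical examples of the kinds of results
    the three instruments report, not real test output.
    """
    cognitive_style: str   # e.g., a CSA result such as "Analytic-Imager"
    learning_style: dict   # e.g., ASSIST scores for deep/surface/strategic approaches
    personality_type: str  # e.g., an MBTI type such as "INTJ"
    student_comments: list = field(default_factory=list)  # reflective journal notes

    def add_reflection(self, note: str) -> None:
        # Students may disagree with a result, but must record why.
        self.student_comments.append(note)

profile = CognitiveProfile(
    cognitive_style="Analytic-Imager",
    learning_style={"deep": 3.8, "surface": 2.1, "strategic": 3.2},
    personality_type="INTJ",
)
profile.add_reflection("The surface score seems too high; I revise notes weekly.")
```

The point of such a structure is not measurement but discussion: each field gives the student something concrete to agree with, dispute, and carry into the PELE design.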
Learning Environment

A general definition of a learning environment was provided by Wilson, who suggested that a learning environment is “a place where people can draw upon resources to make sense out of things and construct meaningful solutions to problems” (Wilson, 1996, p. 3). A more specific definition of the term “learning environment,” which was provided within the context of the management of change in higher education in general and universities in particular, is “a learning environment is a community with its own culture and values providing a variety of learnplaces that support student learning” (Ford et al., 1996, p. 146). This second definition was adapted to describe the concept of the personalised e-learning environment used in this research in the following way: a personalised e-learning environment is a Web-based virtual environment reflecting the culture and values of the individual student and providing links to a variety of possible learning communities and a learnplace that supports autonomous student learning.
Personalized E-Learning Environment (PELE)

The practical definition of the e-learning environment as given to the students for design purposes was in the following form:
A personalised e-learning environment (PELE) is a system which is designed to support the information retrieval, information handling, and learning support needs of the student. In its entirety, the PELE is developed as a mobile (laptop, server, organiser, phone, flash drive) based Web site that replicates as many of the Learning Resource Centre functions as possible. These functions can include: Learning Support, Study Skills, Media Services, IT Support (Administrative), IT Support (Academic), Learning Resources, and Career Services. The PELE should allow the student to store, retrieve, and manipulate information from internal sources (hard drives, digital documents, images, and so forth) and external sources (Internet, WWW, etc.).

The use of the three measures plus an iterative process of discussion, design, and feedback provided a more holistic and systemic methodology for the design and development of the PELE. The context and overview of the Reflective and Participatory Approach to Design is shown in Figure 1.
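As an informal sketch of this design brief, a PELE can be modelled as a mapping from the Learning Resource Centre functions listed above to the internal and external resources a student links under each. The function names come from the text; the link targets and the helper function are hypothetical placeholders, not part of the chapter’s specification.

```python
# A minimal sketch of a PELE as a function-to-resources mapping.
# Function names are taken from the chapter; the paths and URLs are
# hypothetical placeholders a student would replace with their own links.
pele = {
    "Learning Support":            ["notes/index.html"],
    "Study Skills":                ["skills/time_management.html"],
    "Media Services":              [],
    "IT Support (Administrative)": [],
    "IT Support (Academic)":       [],
    "Learning Resources":          ["https://example.org/library"],
    "Career Services":             [],
}

def resources_for(function_name: str) -> list:
    """Return the internal/external resources linked under a PELE function."""
    return pele.get(function_name, [])

print(resources_for("Study Skills"))
```

Because the mapping is owned and populated by the individual student, the same seven functions can yield a different PELE for every learner, which is the point of the participatory design exercise.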
Background

A personalized e-learning environment (PELE) is a virtual learning environment which acts as an interface to learning resources as well as to other learning systems and environments. The process of developing the PELE is seen as a way of enabling students to develop as autonomous learners in that it helps them to think about their own learning in a structured manner. This is considered to be a prerequisite for students in a system of mass higher education in which the concept of e-learning as the basis for active and resource-based learning is often promoted but not explained. The associated personal activities—how to function at an individual level and as an active e-learner in a resource-based e-learning environment—can remain something of a
mystery to the new student. By using a reflective and participatory approach to design, the interface concept can be extended to encourage students to contemplate how they interface with learning materials, learning processes, and learning environments, including the university and its associated subsystems. As we move towards the processes and practices of the Knowledge Society, using RAPAD and developing a PELE also helps develop the reflective, metacognitive, and self-regulatory skills necessary for Lifelong Learning.
The Need for New Approaches for Supporting E-Learning

Recent changes in higher education have produced a set of circumstances that need a new approach to supporting and enabling student learning. The development methodologies for e-learning systems, whether they be human-centred or techno-centred, will play a central role in the new approaches which emerge. Although learning remains central to all students’ educational experience, a large number of factors have changed dramatically from even 10 and certainly 20 years ago, especially in the OECD countries (many of these changes occurred earlier in the USA, although the Web-related changes are similar for all). The number of students has increased significantly. The number of academic staff has stayed largely the same, resulting in increased staff-student ratios and the need for online and distributed learning resources (DfES, 2003). The backgrounds of the students have become more varied, with some universities having more than 50% mature students (Laurillard, 2002). A continuing problem with the current scenario in higher education is that although there may have been a much expanded student intake with the move to a mass system, many of the processes and practices in use are those developed for an instruction-based elite system, despite the introduction of e-learning systems and activities. While many of the traditional procedures and systems will remain useful and relevant, we have to ensure that those in use are suitable for functioning effectively within the resources and constraints of a
Figure 1. Overview: reflective and participatory approach to design (Webster, 2005)
mass system. Some central processes and practices (forms of assessment, tutorials which functioned effectively with 8 participants but struggle with 16 to 20, personal tutoring) are increasingly under-resourced and under strain. In addition, a perception has developed, especially amongst higher education managers, that the provision of information and communication technologies will, by itself, provide useful and cost-saving solutions. This approach often misses the point that the learning systems we are concerned with are social systems of which technology is only one aspect, often acting simply as an information carrier or interaction enabler. The central and most important component remains the student. Laurillard (2002, p. 145) quotes Carol Twigg’s suggestion that an increased understanding of how individuals learn has its corollary in that “increased individualization of the learning process is the way to respond to the diverse learning styles brought by our students” (Twigg, 1994, p. 1). Technology and e-learning systems offer innovative ways of reconceptualising our approaches to learning and teaching delivery systems, but learning itself remains the central and human component of any e-learning system. One way of rethinking learning and e-learning support can be to develop the metacognitive skills of the individual student by using individual cognitive profiles to help construct personal interfaces for interacting with e-learning environments. The need for students to become more actively involved in the management of their own learning implies an associated need for each student to be more aware of and to increasingly draw on his or her personal resources, including the components of his or her cognitive profile.
The Need for Personalized E-Learning Environments

The focus for this work comes from a combination of observed personal experience of learning
and teaching with e-learning environments plus the drive towards personalized learning being experienced in OECD countries. This personalization is reflected in the quotations directly below from U.S. educationalists and UK and Australian politicians. The key assertion is that education in general and higher education in particular are moving into an era of personalized learning. Metros and Bennett (2002), echoing Twigg (1994), also go further in identifying the central role of using cognitive profiles to enable this personalization. This is a central element of RAPAD. “Personalized learning can become a reality when a learner’s profile, determined by preliminary assessment, is used to structure and sequence the learning components” (USA—Metros & Bennett, 2002). “The key strategy is personalised learning” (Australia—Bishop, 2006). “A mission to realise the full potential of each young person through a system of education increasingly personalised around the needs of each child, with a new concept of lifelong learning” (UK—Blair, 2004). In this scenario, attempts are made to match the learning experience of the student with his or her learning needs on an individual basis. The Web, e-learning methodologies, and their integration as e-learning systems will play a key role in these developments.
Changing Conceptions of Learning and E-Learning

Our conceptions and understandings of learning and the learning process have been steadily changing over the past two or three decades. In this last decade, the pace of change has perhaps increased in response to the central facets of massification impacting more fully and more consistently on university teaching. The second edition (2002) of Laurillard’s influential text Rethinking University Teaching shows subtle changes of emphasis that reflect the shift in focus within the sector. For
example, whereas Laurillard has Marton and Ramsden (1988) listing “teaching strategies” in the first edition (Laurillard, 1993, p. 82), in the second edition they list “implications for the design of a learning session” (Laurillard, 2002, p. 69). The subtitle of the second edition also shows a shift in emphasis from the use of the phrase “educational technology” to “learning technologies.” These changes, while minor in quotations from a given text, represent a more substantial shift in our thinking about the relationship between teaching and learning and, as a subtext, about the role of learning technologies in that relationship. A further requirement is presented by the need to make sense of the plethora of terms used to describe different “types” of learning: distance learning, active learning, e-learning, resource-based learning, student-centred learning, self-regulated learning, and networked learning. Unless academics and university teachers have a clear appreciation of the form and content of the process that constitutes student learning, it will be difficult for them to make sense of the variety of approaches to learning confronting them in their professional life. However, Laurillard (1999, p. 113) did suggest that “it is difficult to find an academic with a theory of learning. Or even one who thinks it is his job to have one.” This point and related issues were well explored in a paper from the same conference (Banathy, 1999). With reference to systems thinking and change in higher education, Banathy used a hypothetical conversation between “a subject-matter professor and a systems thinker” (Banathy, 1999, p. 133). The paper, while illustrating Laurillard’s point, also provided an accessible systems-based commentary and analysis of the differences between learning-focused and instruction-focused approaches to higher education.
In considering the role of learning technologies in the teaching and learning relationship, Driscoll (2002) asserts that there are four basic tenets that need to be considered when we, as teachers, think about the use of technology to support our
teaching and, by inference, student learning. These tenets are that learning is active, social, reflective, and occurs in a context. This concurs with Goodyear (2001), who considers learning from a cognitive perspective through the lens of Shuell’s (1992) work. In this framework, learning can be conceptualised as passive reception, discovery, knowledge deficit and accrual, or guided instruction, with this last form fitting “best with current scientific ideas about learning” (Goodyear, 2001, p. 71). Within this model, the significant elements of learning are then formulated as active, individual, cumulative, self-regulated, and goal oriented. A mode of implementation for these approaches is put forward by Simons et al. (2000, p. 9), who suggest that “new instruction should be aiming for the new outcomes of learning through the facilitation of the new learning processes and strategies in which a new balance between guided learning, experiential learning and action learning occurs.” A major consideration of these models and perspectives is that each suggests that the design of systems for learning needs to be a systemic as well as systematic process. The systemic perspective then logically holds at the individual, group, or organisational level and takes these factors into consideration. An example of elements of this systemic and systematic approach is more fully contextualised and presented by Goodyear (2002, p. 11). It is a point that has been made quite strongly by several authors in recent times (Ford et al., 1996; Knight, 2001; Trowler, Saunders, & Knight, 2003; Weil, 1999) and has resonance with the writings on both organisational and educational change of Argyris and Schön (1996) and Checkland (1990).
Learning Is Complex

In the case of learning itself, dictionaries often provide a simple definition of the phrase “to learn.” For example, the Shorter Oxford English
Dictionary (3rd ed.) offers: “To get knowledge of (a subject) or skill in (an art, etc.) by study, experience, or teaching” (Onions, 1983, p. 1191). A more problematic issue, referred to by Driscoll’s principles (Driscoll, 2002) is that of understanding how we learn, or in a more complex way, how we move from gathering information about something to gaining an understanding of that information within our own social, affective, and cognitive domains. The roles of others—parents, friends, peer groups and, especially, teachers—are important here. Additionally, and within the framework of this discussion, it is seen as important that the individual student gains an appreciation of how he or she learns or acquires that understanding. When, with reference to professional learning, Trowler and Knight (2000, p. 37) state that “much professional learning is social, provisional, situated, contingent, constructed and cultural in nature,” it follows that this is also true of student learning. It might not be necessary, possible, or even desirable to try to explain all of these aspects to new university entrants, but some knowledge of an individual’s own learning processes and how to use them effectively has to be a useful resource for each student. One reason for this is that the types of learning engaged with in higher education are more complex than those encountered at school (Knight, 2001). This is true both of the types of learning in themselves and the social and organisational setting in which many undergraduates find themselves as they emerge into adulthood. Commenting within the context of considering the process of curriculum-making, Knight states that “it is this complexity that especially distinguishes university study from school study” (Knight, 2001, p. 369). 
It is now widely accepted that an important part of the learning process is that each of us builds or constructs new knowledge on the basis of the existing knowledge (Goodyear, 2002; Knight & Trowler, 2001; Simons et al., 2000; Vermunt, 1998). This is the “constructivist” paradigm which
is commented on at greater length in another section. Within this context, a further definition emphasises that learning is also an active process and one to which we are well suited: “Learning is a basic, adaptive function of humans. More than any other species, people are designed to be flexible learners and active agents in acquiring knowledge and skills” (Bransford, Brown, & Cocking, 1999, p. 45). As with learning, there are many forms and phrases to describe e-learning. If we accept that the “learning” part of e-learning is effectively encapsulated in the above quote, then Goodyear (2005) provided an extension and clarification of the term “e-learning” which emphasises the learning aspects when he suggested that: The terms e-learning, Web-based learning and online learning now have wide currency in education. I use the term networked learning to mean a distinctive version of these approaches. I define networked learning as: “learning in which ICT is used to promote connections: between one learner and other learners; between learners and tutors; between a learning community and its learning resources.” (Goodyear, 2005)
A Systems Perspective of RAPAD and PELE

This section comprises an overview of the systems approach to the problem and how it affected the development of RAPAD as a methodology and PELE as a system. The systems paradigm or systems inquiry is an approach which uses the elements and organisation of systems theory (the core transformation at the conceptual level, hierarchy, system boundary, environment, etc.) as a lens for investigating student learning and e-learning system design in higher education. The approach encourages us to be systemic as well as systematic.
Ontologically, systems philosophy takes a systems view of the world and thus provides a holistic perspective. This holistic perspective allows us to envisage the university as a system with the student as learning system (SLS) as a subsystem (both with and without the individual e-learning environment (ILE)). The university and SLS can also be conceptualised in terms of their relationships with other systems and subsystems. Systems philosophy provides a process-oriented view, and the organisation of the relationships and processes between relevant entities is central to the emergence of the properties which help define a given system. In the case of this research, the arrangements and relationships between the student, the PELE subsystems, and the e-learning support processes and materials help define the emergent system. Different actors will view the system in a range of ways. However, the framework provided by the cognitive profile helps ensure that the viewpoint represented by the student and PELE as SLS is that of the student. Epistemologically, the systems approach takes synthesis as both the starting point and objective of systems inquiry. The combination of the systemic and systematic viewpoints allows analysis to be used as a useful tool rather than as an end in itself. Researchers such as Schön (1991), Argyris (2004), Argyris and Schön (1996), Checkland (1981, 2000), Checkland and Holwell (1998), and Banathy (1996, 1999) have all worked to apply a systems approach and systems concepts to complex social systems, including higher education. By viewing the various scenarios systemically and in terms of a hierarchy of related systems and subsystems, an analytical approach can be adopted and used without losing sight of important systems relationships. Systems theory thus provides tools and techniques for organising and understanding complexity. Properties such as hierarchy and emergence allow us to define the student and PELE within the context of the university and related e-learning systems. Systems methodology provides strategies and models for applying systems theory to complex systems and problems. Systems methodology can be used in two related but separate modes. The first is to use it as a way of organising and implementing enquiry about systems. The second is as a framework for making sense of the system from
Figure 2. Student as reflective and participatory system designer for PELE
within any events which might be taking place. In this study, systems methodology was used in both these ways. The models, methods, and strategies were used to define and explore, for example, the concept of the student and PELE as an e-learning system. In addition, the use of systems methodology was, in itself, an iterative and self-reflexive process in which the methodology was a tool that was refined and developed by the process of being used.
The Student as a Reflective and Participatory System Designer for PELE

From a systems perspective, the student can be considered to be part of the university conceptualised as a human activity system. This system then contains several related subsystems, each made up of people, processes, and technology. We can also conceptualise the student as being part of an e-learning system and, consequently, as a system, being combined with and interacting with relevant processes (attending, studying, using the library) and technologies (books, television, computers). As with all human activity systems, there can then be several different conceptualisations and viewpoints of the component parts and makeup of the student as learning system (academic, administrator, parent, peer, etc.). However, the perspective which is the most important is that of the student him or herself. As suggested above, we can further consider the student as learning system (SLS) to be part of a larger learning system, the university. The SLS interfaces with many other subsystems which function as e-learning or e-learning support systems. The interfaces between the SLS and these other systems are of crucial importance in the functioning of the student as an active and autonomous learner. From the perspective of a compliant student functioning within the transmission model of e-learning, the student can be considered to be
suboptimally interfaced with many important systems. The student can be conceptualised as being tightly coupled to subject learning through the provision of prescribed materials and processes, the use of the transmission method of teaching and learning, and a lack of metacognitive awareness and learning autonomy. In terms of other subsystems, the student may be poorly interfaced because of some of these factors plus a lack of process knowledge—for example, a poor knowledge of administrative procedures or of how to access information on those procedures. The student and individual e-learning environment combine as SLS to produce a more effective e-learning system. This new system produces a tight coupling between the student and the PELE. This then allows a loose and flexible coupling with the subjects as e-learning systems and other university e-learning and learning support systems. This is the concept of Flexible Student Alignment (Webster, 2005). By enhancing each student’s metacognitive skills and self-regulatory awareness, the locus of control is shifted towards the student. The more autonomous system that emerges is better able to handle the demands of active and independent e-learning. Figure 3 presents an overview of this process. The PELE is necessarily an open system designed with the student in order to help the student to interact with all facets of his or her environment in order to support and sustain the learning process. The initial interface of a prototype Individual E-Learning Environment is shown in Figure 3. It is built around the personal learning activities of the student and also allows for more personal elements to be included. The student’s cognitive style (Analytic-Imager in this case) impacts both the design (e.g., structure) and content (e.g., balance of text and graphics) of the page.
The student’s learning styles affect the content (e.g., time management, learning organisation procedures and resources, and resource links for identified areas of study weakness). Personality type impacts the look and feel of the learning environment but also the
information-related processes via the instrument’s information-based dimensions.
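The mapping from profile to PELE design described above can be sketched as a handful of simple rules. The rule bodies here are hypothetical illustrations of the kinds of decisions a student designer might record; they are not the study’s actual design rules, and the thresholds and style names are invented for the example.

```python
def design_hints(cognitive_style: str, deep_score: float, mbti: str) -> dict:
    """Illustrative (hypothetical) rules linking a cognitive profile
    to PELE design choices; not the rules used in the study."""
    hints = {}
    # Cognitive style shapes page structure and the text/graphics balance.
    if "Imager" in cognitive_style:
        hints["content_balance"] = "favour diagrams and images over dense text"
    else:
        hints["content_balance"] = "favour textual summaries"
    if "Analytic" in cognitive_style:
        hints["structure"] = "hierarchical menus with clearly separated sections"
    else:
        hints["structure"] = "overview page with holistic maps"
    # Learning style drives supporting content, e.g., study-skills links
    # for identified areas of weakness (threshold is invented).
    if deep_score < 3.0:
        hints["content"] = "add resources for deepening engagement with material"
    # Personality type, via its information-based dimensions, affects
    # look and feel and information handling.
    hints["look_and_feel"] = "structured, scheduled" if "J" in mbti else "flexible, exploratory"
    return hints

print(design_hints("Analytic-Imager", deep_score=3.8, mbti="INTJ"))
```

In the methodology itself these decisions emerge from discussion and reflective journals rather than from fixed rules; the sketch only makes the profile-to-design dependency explicit.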
Flexible Student Alignment (FSA)

Flexible student alignment (FSA) is produced by the student and PELE subsystems forming an adaptive system for interfacing with the subsystems of the university e-learning system. Biggs (1996, 1999, 2003) proposed the use of the concept of “constructive alignment” and sees the process as “aligning curriculum objectives, teaching/learning activities and assessment tasks” (Biggs, 1999, p. 65). This concept has become a generally accepted approach to viewing the teaching-learning process. It takes a constructivist perspective on learning and aims to align objectives expressing the types of understanding required of the student with assessment tasks which help us to see that those objectives have been met. The teaching context and the assessment tasks also help students to undertake suitable e-learning activities and the assessments clearly articulate what the students need to do. This is a useful and productive approach. It does, however, consider alignment largely from the teacher and teaching-enabled learning perspective. We can also adapt this to the idea of developing e-learning systems and environments—that is, most current systems and environments are developed from the organisation’s perspective. However, if we recognise the need for and advantages of the personalization of learning and e-learning for lifelong learning in the knowledge society, then we need adaptive systems and environments. The RAPAD methodology allows us to develop personalized e-learning systems and environments to promote Flexible Student Alignment via the involvement of the student in the design and development process. McCune (2003) recognised this when reporting extensive work on university teaching-learning environments (Entwistle, 2003; Entwistle, McCune, & Hounsell, 2002). The team had modified
their view of constructive alignment to consider the concept of “alignment to students” (McCune, 2003, p. 24). She also suggested that learning measures and questionnaires had their limitations in providing descriptions of the complexity of alignment in any given situation and stated that: While a teaching-environment may seem well aligned in terms, for example, of the correspondence between the forms of learning encouraged by the different aspects of the teaching and assessment, this does not mean that this environment will be equally suitable for all of the students involved. (McCune, 2003, p. 24) We can paraphrase this to say that: while the e-learning systems and environments may seem well aligned in terms of, for example, the correspondence between the forms of e-learning required for the overall efficient functioning of their university, this does not mean that these systems and environments will be equally suitable for all the students involved. What is needed is a series of personalized subsystems which can interface with the university e-learning systems and environments, with the software processes, information, and learning objects arranged with and by the individual student for each student’s e-learning purposes. The work reported here focused on the learner and consequently considered alignment from the student perspective as well. There is a close fit and tight coupling between the student and the PELE as e-learning support system. This, and the facility for loose coupling and flexibility between the PELE and the university as an e-learning environment, enables students to better align themselves with the various teaching-learning environments they encounter. Flexible Student Alignment allows the student to use the SLS-PELE system to exercise individual flexible alignment with respect to the multiplicity of teaching-learning environments and other university e-learning support systems encountered.
RAPAD
Technology or Human-Centred E-Learning Systems Design?

Many of the changes in education and society in recent years have been technology driven. In most OECD countries (excluding the USA, where a mass or even universal system of higher education has long been in place) there has also been a shift from an elite to a mass system of higher education (Trow, 1973). This shift has meant an increase in participation rates from 10-15% to 30-40% of the 18-21 age group, alongside wider participation from the population in general (DfES, 2003). This combination of changes (and reductions in per capita student funding) has meant that new methods of teaching and learning have become necessary. Technology is seen as a major enabler, but the learning is still done by the student, aided by good teaching. This means we need student-centred learning systems rather than technology-centred systems. The changes have been placed in a broad context above and will be focused on at the individual level with reference to learning and to organisations in general and universities in particular. The user-centred design perspective and systems approach adopted is set within a systems theory framework, and much of the theoretical thrust comes from an integration of the ideas of Donald Schön (1971, 1983, 1987, 1991) and Peter Checkland and co-workers (Checkland, 1981, 2000; Checkland & Holwell, 1998; Checkland & Scholes, 1990). Schön and Checkland were concerned with change in society and organisations. Schön is perhaps most closely identified with education and learning; Checkland with organisational change and information systems. Checkland acknowledges the strong links between the central theses of the two authors (Schön and The Reflective Practitioner, Checkland and Soft Systems Methodology) in the second of his major texts, Soft Systems Methodology in Action (Checkland & Scholes, 1990). In the final chapter, entitled Gathering and Learning the Lessons, Checkland comments that "this chapter
is intended to demonstrate an acute case of the kind of reflection which Schön (1983) advocates in 'The Reflective Practitioner'" (Checkland & Scholes, 1990, p. 276). The development of RAPAD then draws on the theoretical and applied work of both men, separately and together. Separately, because the individual contributions included Schön's "The Reflective Practitioner" and Checkland's "Soft Systems Methodology." Together, in that they both draw extensively on systems theory and Vickers' concept of "appreciative systems" to help gain an understanding of the operations of both individuals and organisations. This is the basis of Checkland's "Human Activity Systems" (Checkland, 1981, 2000). The learning system produced by the integration of the students, RAPAD, PELE, and supporting technologies is considered to be an example of such a system.
Information Systems Methodologies and User-Centred and Participatory Design

In the development of RAPAD, several information systems methodologies were drawn on at different times. These include Checkland's Soft Systems Methodology, Vora's Human Factors Methodology for developing Web sites (1998), and the Human Factors for Information Technology methodology and tool kit, HUFIT (HUSAT, 1990), which was used for the interface design guidance. There are an enormous number of methodologies for the development of information systems. Most, fundamentally, are products in the marketplace, so each has its own tools and techniques, all of which are claimed to be superior to all the others for doing essentially the same things: conducting the activities of the systems development life cycle. Some authors (Avison & Fitzgerald, 2003; Avison & Wood-Harper, 1990; El Louadi, Galletta, & Sampler, 1998) have suggested using a "contingency approach" to system development. This allows for the selection of different sets of
methods and techniques according to criteria such as the complexity of the system under development, the role of the user in the system, and the expertise of the system developer. RAPAD can also be considered to be a contingency methodology, drawing, as it does, on a range of tools and techniques which can be adapted for a variety of circumstances. This flexibility is valuable in dealing with complex scenarios where an innovative approach is needed. This is often the case in higher education, where there are additional reasons for complexity. As well as the different cultural and social norms encountered, learning support systems have to have sound pedagogic aims, objectives, and achievements. Consequently, it can be argued that the implementation of such systems can be more difficult than that of "normal" business information systems. Participatory design was pioneered in Scandinavia in the 1960s and 1970s (Preece et al., 2002). As its name suggests, it is designed to encourage user involvement in the design process and, along with contextual design, is one of the user-centred approaches to interaction design. Whereas contextual design aims to use an ethnographic approach to help the designer to understand the user in his or her social, work, and cultural context, participatory design encourages the active involvement of the user in the design process. We can consider the similarities between contextual design and participatory design. Contextual design has seven activities: contextual inquiry, work modeling, consolidation, work redesign, user environment design, mockup, and test with customers (Preece et al., 2002, p. 296). One form of participatory design, as used here, is to broadly follow these activities, but to ensure that the user (or learner in this case) is dynamically and iteratively involved in the full design and development process.
This involvement is not always easy to ensure, although the participation of students studying a human computer interaction unit in the first and main iteration of this study greatly facilitated the process.
The participatory approach in this study was operationalized by the use of cognitive profiles and the involvement of students in reflecting on their own responses and then applying them to learning environment design. Using the three measures plus an iterative process of discussion, design, and feedback gave a more holistic and systemic approach to the design of the PELEs. In the information systems arena, a central tenet holds that you cannot design a better or improved system without fully understanding how the current system works, and no one understands the day-to-day working of a system like its users. As with many well-worn sayings, it is uttered frequently but followed rarely. Giving students relatively comprehensive information concerning their approaches to learning and their information processing preferences (with reference to the layout and structure of learning materials and, by inference, interfaces) allows them to reflect and comment on both the accuracy of the measures and their applicability to the tasks in hand, including thoughts on how and why they learn. The use of the additional learning style and personality elements of the cognitive profile also allows comparison between the measures and an extension of the individual differences being considered.
Why Use a Reflective and Participatory Methodology?

The overall process for the individual student is one of reflecting on the elements of a personal cognitive profile and then, after discussion and consideration, applying the results of those reflections to the development of a Web technology-based personalised e-learning environment. This approach has several key features that contribute to its effectiveness. These include the following:

• Participation in the process helps students to develop metacognitive awareness and self-regulatory skills and to explore their attitudes to learning and e-learning in a manner which promotes lifelong learning.
• Students produce a personalised Web site, or Personalized E-Learning Environment (PELE), which provides personalized access to learning materials and support systems.
• The student is a major contributor to and participant in the design and development process, but it is not assumed that the student can do this alone; the instructional designer and teacher have key roles in facilitating the process.
• A framework is provided that affords both a structure to work within and a process to follow.
• Participation in the process helps students to learn about user-centred, learner-centred, and participatory approaches to technology-based e-learning environment design.
• As a product of the process, students get a resource which works in several ways and on several levels: an information organiser, an e-learnplace, a virtual/physical interface, a cognitive interface, and an organisational interface.
• The design process helps give participants a better understanding of student learning and of e-learning systems design.

Figure 3. RAPAD provides the guiding methodology, but the cognitive profile and PELE are key components to help reconceptualize learning and e-learning (RAPAD provides the overall process and framework; the cognitive profile enables learner-focused reflection on learning characteristics; the PELE provides design and development context and focus)
Student Engagement with RAPAD

In terms of student engagement with RAPAD and the process of reconceptualizing their understanding of personalized learning, the following are key steps in the application of the methodology (several of these tasks are performed iteratively or in parallel over the life cycle of the process):

1. Continuous reflection and comment on all aspects of the process via mechanisms such as discussion, reflective journals, tutorial and assessment tasks, and learning-related design tasks.
2. An introduction to learning and the possible variations in and impact of cognitive styles, learning styles, and learning preferences on the learning process.
3. Taking the cognitive profile tests, considering personal results (and being allowed to disagree with them, with the proviso of explaining why), and discussing and commenting on the results within the context of current individual conceptions of personal learning.
4. Producing a basic learning/personal Web site as part of the first assessment task (along with a written version of the previous activity).
5. Engagement with online learning resources from a variety of sources to consider personal preferences for learning tasks and activities (structure and form of educational materials, doing assignments, individual and collaborative learning, information retention, revising, etc.).
6. Doing a series of tutorial-based profile and design related tasks and producing an initial design document and series of draft screens for the PELE (second assessment task).
7. Discussing tutor feedback on the design document in group and individual scenarios.
8. Developing a series of personalised learning strategies for the degree course, the current year, a semester, a unit, and an assignment, and considering how they might be integrated into the PELE. These strategies are seen as flexible and dynamic, to be adjusted according to varying constraints.
9. Developing, documenting (i.e., explaining the design with reference to one's personal learning profile as part of the final assessment task), presenting, and receiving feedback on the actual Personalized E-Learning Environment.
10. Reflecting on the overall process, changing personal conceptions of individual learning, and integrating the new learning related
knowledge and PELE into all learning activities. A version of the above scenario is presented in Table 1 as implemented for the Learning at University course. To summarize, a reflective and participatory approach to design is a developmental methodology which encourages reflection within the context of a participatory approach to design. In this case it is reflection by students on aspects of their own learning and participation in the process of the design and development of personalised e-learning environments. It is not assumed that students can easily or naturally contribute to the design and development process, so the concept of the cognitive profile has been introduced to help the process. A cognitive profile is considered to consist of measures of an individual’s cognitive style, learning style, and personality type. In terms of the design of a personalised e-learning environment, the term “reflective” is used as in Schön’s phrase “the reflective practitioner” (Schön, 1983). Participatory design is an approach to design which is not only user-centred (or learner-centred), but actively involves the user (student) in the design process. This is especially important where there is a large element of interaction between the user and the system being designed. One mechanism for doing this is student or user involvement in the design process, that is, a form of participatory design where students can draw on and develop their knowledge and understanding of how they learn within a framework and discourse provided by academic staff, university teachers, and student peers.
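For readers who prefer a concrete representation, the three-part cognitive profile described above could be sketched as a simple record whose values feed illustrative design hints for a PELE. This is a hypothetical sketch only: the chapter prescribes no implementation, and all field names, example values, and style-to-design mappings below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class CognitiveProfile:
    """Hypothetical record of the three measures named in the text:
    Riding's CSA, Entwistle's ASSIST, and the MBTI."""
    cognitive_style: str    # e.g., a CSA imagery dimension: "imager" or "verbaliser"
    learning_approach: str  # e.g., an ASSIST approach: "deep", "strategic", "surface"
    personality_type: str   # e.g., an MBTI four-letter code such as "INTP"

    def pele_design_hints(self):
        """Map profile elements to illustrative PELE design suggestions.
        The mappings are invented examples, not findings from the study."""
        hints = []
        if self.cognitive_style == "imager":
            hints.append("favour diagrams and visual navigation cues")
        else:
            hints.append("favour text-based menus and verbal summaries")
        if self.learning_approach == "deep":
            hints.append("link materials to underlying concepts and readings")
        return hints

profile = CognitiveProfile("imager", "deep", "INTP")
print(profile.pele_design_hints())
```

In the methodology itself, of course, this mapping is performed by the student through reflection and discussion rather than by any rule; the sketch only makes the profile-to-design relationship explicit.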
The Development of RAPAD

There were four main phases in the development of RAPAD:
1. The initial development and formulation of ideas from observed teaching practice
2. A structured research study with Level 3 Human Computer Interaction students
3. The development and reformulation of ideas from phases 1 and 2 with post-graduate conversion students taking several iterations of an Information Systems Development course
4. The development and restructuring of RAPAD for less technologically experienced students, and its use as the major part and focus of a unit entitled "Learning at University" for over 400 pre-university students
Each of the four main phases is discussed in more detail below. As with all dynamic user-centred methodologies, further use brings new developments and refinements.
Phase 1: Initial Formulation of the Need for Personalised E-Learning Environments

The roots of the development of RAPAD lie in the period following the advent of the World Wide Web in the UK. In the mid-1990s, the Web and associated work-related factors initiated a process of thinking in a more structured manner about emerging themes and problems. The first of these was when I observed a personalized and individual interface (for a partially sighted student) in practice. The second was a concurrent period of major organizational change, not uncommon in modern higher education, which had a negative impact on the student using the personalized interface. Ideas concerning information overload and attempts to enable students to handle the ever-increasing availability of masses of relatively unstructured information were initially developed. Thoughts on interface preferences were further prompted when I supervised the above student taking a written exam with the specially constructed interface. Both of these reflective
episodes occurred against the backdrop of a series of university reorganizations. The reorganizations reflected both social and technological changes in higher education and responses to government policy and suggested a need to rethink student learning support resources at a personal level. I explored some of these ideas in several of the courses I taught over the next few years. These included courses in Human Computer Interaction and Information Systems Design. One newly developed course allowed me to explore more of the cognitive and interface issues emerging with Internet and Web developments—Intelligent Interfaces for the Internet.
Phase 2: Formal Research Program

A formal research program was designed to explore several of the questions raised by the experiences of the students and myself in the first phase. Curriculum and syllabus changes allowed the redesign of a human computer interaction course to integrate the cognitive and interface issues into the course material and assessment. The stated aim of the research was "to consider how cognitive profiles and a reflective and participatory approach to the design and development of a Web-based learning environment can be used to enable autonomous learning and help students interface with learning processes, materials, and environments" (Webster, 2005, p. 3). Three well-known and reliable measures, Riding's Cognitive Styles Analysis (Riding & Rayner, 1998), Entwistle's Approaches and Study Skills Inventory for Students (Tait, Entwistle, & McCune, 1998), and the Myers-Briggs Type Indicator (Myers, McCaulley, Quenk, & Hammer, 1999), were used to develop the cognitive profile. Computer-based and self-report tests for each of the above measures were administered to a group of 64 students participating in a human computer interaction unit. The results of the tests were made available to the students within one week of each measure being administered. The students were
then asked to reflect on and write about their thoughts on the accuracy and relevance of the measures. Later in the unit, each student had to develop a Web-based personalized e-learning environment (PELE) providing access to a series of e-learning related information resources. This required the application of elements of the cognitive profile to the design and development process. In addition, the students were asked to document the reasons for their design. A range of qualitative and quantitative measures was collected. Student reflections on and responses to the process were considered via the use of a questionnaire, reflective journal, and interviews. The comments on the form and content of the created Web sites contained in the documentation were also analysed. Two related metaphors were used to help the students to conceptualise the design of the PELE. The first was that of the Learning Resource Centre (LRC), which is basically a modern university library integrating digital information management and learning support services. One definition used was: The Learning Resource Centre (LRC) is a meeting place for all those who wish to learn. It is the electronic hub of the university and our surrounding communities, linking us to the wider global community. It harnesses new technologies effectively to make learning more adaptable and flexible and more widely available. The LRC is at the centre of the university's concept of a new learning environment. This environment focuses all our available resources into a teaching and learning strategy based on our understanding of the changing trends in the learning community. The second metaphor was that of the PELE conceptualised as a small personal house which the student could enter to find the personalized learning resources in a set of rooms designed to support each specific learning activity. This is a similar, but more personal and individual, use of the "house" metaphor to that used in the "Bookhouse" (Pejtersen, 1989).
Emergent Issues

The initial period of analysis involved using the quantitative data to provide a broad overview of the profiles, responses, and attitudes of the respondents. This was done using the data from each of the cognitive profile measures plus the quantitative data from the survey. However, as would be expected and as suggested by Summerville (1999), the qualitative data provided much greater insights into the individual aspects of e-learning. The student comments and associated qualitative data indicated that engaging in the process of reflecting on the characteristics of one's own individual cognitive profile did have an effect on the design, development, and content of the individual e-learning environment. Several students questioned their prior lack of knowledge of this type of information and commented that they would have preferred to have access to this type of metacognitive information in their high school (or even their university) careers. The participants often had a vague awareness and sketchy understanding of their preferences for information handling, but this remained in an unstructured and unfocused form. The information from their cognitive profile gave them an opportunity to look at this scenario and their preferences in a much more informed and structured manner. This then helped inform the PELE design, in terms of its impact on both the structure and form of the environment. Feedback and comments indicated that the CSA and its dimensions provided the most useful data and criteria in terms of developing the "look and feel" of the PELE. The MBTI and ASSIST measures also provided personal learning and information processing preference details and these, while having less impact on the design and construction of the PELE, proved useful with specific reference to the learning process. This then impacted on the PELE in terms of materials accessed to support e-learning preferences.
More important, however, was the manner in which several students commented on broader aspects of their learning experiences and approaches to learning and sometimes identified key incidents which affected their learning development. Others commented on the difficulties they had in adjusting to the different demands of studying at university. They also pointed out that the way they studied in the later parts of their time at university was very different from that adopted in the earlier stages. The manner of this transition appeared to be a random one, often enabled by personal recognition of the problem and self-help or the requested intervention of a lecturer, tutor, or counsellor. Consideration of these and other examples from the different types of data sources, especially the reflective journals, process documentation, survey comments, and interviews, indicated several emergent issues. The first issue to emerge was that the real impact of the cognitive profile measures was in enabling students to reflect on their e-learning habits and processes in a structured manner. The actual scores were less important than providing each student with a set of relevant learner categories and characteristics (whether imager or analytic, extraversion or intuition, "interest in ideas" or "fear of failure") which could be used to think about their own e-learning experiences. The measures and activities provided a framework and a structured set of processes with which the participants could engage reflectively with important features of their own learning. By critically assessing their own learning needs and applying their assumptions and conclusions to an iterative design process aimed at supporting their personal learning requirements, the students could effectively engage with understanding how they learn at an individual level.
This leads to a much-needed "conceptual shift" in students' understanding of individual (and thus collaborative) learning, the need for which was suggested by Vermetten et al. (2002).
To improve the quality of student learning, instructional measures should address the conceptual domain of learning conceptions and beliefs, of which students have to become aware, and which they have to develop, for example by means of critical reflection. (Vermetten et al., 2002, p. 263) In addition, the responses suggested that both the range of issues students considered as affecting their learning and the manner in which these issues interacted were very wide, yet produced an individual mix for each student. This outcome appeared to support the comments of Summerville (1999) and Pillay (1998) on the need for a more process-based approach comprising the collection of qualitative data. Furthermore, social issues such as the intervention of others, or the need to make sense of a process which students felt they should understand (how to study effectively at university) yet clearly did not, indicated a need for a revision and extension of the methodology and e-learning system.
Phase 3: The Introduction of SSM Techniques

The third phase saw the development and reformulation of ideas from the first two phases with post-graduate conversion students taking several iterations of an information systems development course. A major outcome of this phase was the introduction of specific techniques from Checkland's Soft Systems Methodology (SSM) (Checkland, 1981, 2000), especially Rich Pictures, into the process and research. The use of Rich Pictures at the student modelling phase was introduced after the initial research and Human Computer Interaction unit iteration. The purpose of its introduction was to see if it could be used to draw out issues relating to the social and interactive elements of learning. It then provided the basis for the "organisational interface" by allowing the student to place him or herself at the centre of the university as organisation in a pictorial format. An example Rich Picture is shown in Figure 4.
Figure 4. A rich picture to help clarify learning support needs and PELE design
The Rich Picture has been described as a "tool for reasoning about work context" (Monk & Howard, 1998), and both the technique and the methodology have been applied to educational scenarios by several authors and practitioners in addition to Checkland, its originator (Briggs, 2003; Kassabova & Trounson, 2000; Patel, 1995). In the systems development unit, the students were asked to reflect and comment on the perceived learning support needs of different types of student (undergraduate, post-graduate, part-time, full-time, etc.). After various exercises and discussion in the context of systems development, they were asked to produce a Rich Picture of their own situation with respect to learning support resources and systems. Again, the concept of the Learning Resource Centre was used to illustrate and aid this exercise. The examples produced illustrated a variety of individual perspectives of how different students see themselves acting and interacting within the context of the university as e-learning environment: very much a personalized viewpoint. The students then used their Rich Pictures to define the PELE as a system in systems development terms (see below).
Phase 4: The "Learning at University" Unit

The fourth phase of RAPAD's development saw the methodology developed and restructured for less technologically experienced students. The reflective and participatory model developed in the previous three phases (including the cognitive profile, Rich Pictures, and personalised e-learning environment) was introduced as a pre-university unit that formed the central unit of a university preparation course for more than 400 pre-university students. This development represented an attempt to change the unit or course from one format, study skills based and not electronically supported, to a format supported by the Blackboard Learner Management System. The course syllabus (see Table 1) adapted and used the model presented above to develop the metacognitive and self-regulatory skills of the students about to enter university life and to help enable e-learning and lifelong learning. The differences between 64 predominantly third-year students doing a Level 3 unit in "Human Computer Interaction" and more than 400 pre-university
students completing a "Learning at University" unit are significant. However, the exercise proved very successful, and by the second iteration of the course the methodology, delivered as a unit, was successfully integrated with Blackboard and the unit assessment practices. This phase also provided additional data and material for consideration in the development of RAPAD as a methodology to enable students to reconceptualize their learning within e-learning environments.
"Learning at University": Participatory Methodology and Unit as a Learning System

The research has a practical focus. It was always intended that the research and methodology would provide the basis of several short courses and also longer units if possible. The main target group was first-year students, and it was hoped that short courses could be provided in the first semester, although it was recognised that the best time could be before commencing university study. A variety of courses, including half-day, one-day, and one-week courses, were designed for students (and staff in one case), but there were difficulties with fitting into the current diet of study skills courses. However, an opportunity did arise
with the redevelopment of a series of university preparation units to integrate the material into a keystone unit for a university preparation course. This unit, Learning at University, was aimed at helping students to understand their own learning more fully and thus to help provide the individual metacognitive skills and strategies necessary for each student to more fully benefit from the other units comprising the course. The methodology and the unit can also be seen as parts of a learning system designed and developed to help the student to develop as an autonomous learner. This is within the context of the different systemic demands of mass higher education (educational and social). In Banathy's (1999) terms of key entity, key function, and organising the education for learning outcomes (i.e., of the learning system), we have the following:

• The key entity is the student.
• The key function is to enable autonomous e-learning.
• The question of how to "organize the education for attaining the best possible learning outcomes" is addressed via the current and proposed implementation of the "Learning at University" unit.
Figure 5. Conceptual model of “Learning at University” as a learning system
Again using techniques from the Soft Systems Methodology (Checkland, 1981, 2000), we can define the elements of the system as shown below. These are followed by a Root Definition, which draws the elements together, and a Conceptual Model, which presents the minimum subsystems needed to allow the system defined in the Root Definition to function.

• Client: The individual student
• Actors: The individual student, other students, university staff (academic and administrative)
• Transformation: Identification and satisfaction of the individual student's need to develop as an autonomous e-learner and lifelong learner
• Worldview: Autonomous e-learning is a desirable learner attribute in mass higher education and the knowledge society
• Owner: The university
• Environment: Social and educational change, university as e-learning environment, peer group, work opportunities
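As an aside for readers who think in code, the CATWOE elements of the SSM analysis can be captured as a plain mapping from which a root definition is assembled. This is purely an illustrative sketch, not part of the methodology; the key names and the assembled sentence are our own.

```python
# The CATWOE elements of the SSM analysis, captured as a plain mapping.
# The phrasing follows the list in the text; the structure itself is illustrative.
catwoe = {
    "Client": "the individual student",
    "Actors": ["the individual student", "other students",
               "university staff (academic and administrative)"],
    "Transformation": "the individual student's need to develop as an "
                      "autonomous e-learner, identified and satisfied",
    "Worldview": "autonomous e-learning is a desirable learner attribute "
                 "in mass higher education and the knowledge society",
    "Owner": "the university",
    "Environment": ["social and educational change",
                    "university as e-learning environment",
                    "peer group", "work opportunities"],
}

# A root definition can then be assembled from the owner and transformation.
root_definition = (f"A system owned by {catwoe['Owner']} that performs the "
                   f"transformation: {catwoe['Transformation']}.")
print(root_definition)
```

The point of the sketch is simply that the six CATWOE elements together determine the root definition, as the following section shows in prose.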
Root Definition

The "Learning at University" unit and associated personnel and resources comprise a system, owned by the university and operated by the student and university staff, which identifies and satisfies the individual student's need for autonomous e-learning capabilities. It operates in an environment enhanced and constrained by the academic and social resources and relationships.

In practical terms, this meant the integration of the framework, processes, and activities of RAPAD into the Learning at University unit. The unit was assessed by a series of linked and integrated assessments. The first required the students to produce a simple Web site, following lab material provided, plus an initial cognitive or learning profile based on their results, tutorial discussions, and reflection. The second assessment
focused on producing a design document for a PELE with specific reference to their profiles. Following feedback and further exercises and discussion, the final assessment had several components. These were: to produce a final version of the e-learning environment (PELE), to describe and critically analyse the structure of the PELE according to each individual's cognitive profile, and finally, to present and demonstrate, orally and visually, their e-learning environments to the respective workshop groups. The integration of the processes and materials into the unit as a set of lectures, tutorials, and workshops is shown in Table 1.
Conclusion

This chapter has covered a lot of ground and summarized much of the development work of the past decade. More detailed information, data, and results concerning the formal research program and other developments can be found in several related publications (Webster, 2002, 2003, 2004, 2005). The information provided by the three measures comprising the cognitive profile allowed students to reflect on their learning-related characteristics and preferences in a much more structured and informed manner. The outcome of applying the results of this reflection was enhanced metacognitive skills and knowledge. The design of the personalized e-learning environment was an iterative process which both enabled the reflection and was affected by the user profile in terms of structure and content. Many found that the dimensions of the CSA gave them the most directly useful information in terms of the format and content of the PELE and interface design. In contrast, the MBTI and ASSIST measures provided personal e-learning and information preference details which were informative and had greater relevance to the e-learning process. These details could then be either integrated into the 'look and feel' of the PELE
RAPAD
Table 1. RAPAD implemented as the "Learning at University" unit

| Week | Lecture | Tutorial | Workshop/Lab |
|---|---|---|---|
| 1 | University learning and you: individual differences and independent learning | Introduction to the unit | Introduction to the lab. Logging on. Accessing Blackboard. ASSIST questionnaire |
| 2 | Student cognitive and learning profiles | University learning and you: individual differences and independent learning | Introduction to Web design for e-learning environment development |
| 3 | Learning styles and learning strategies | Student cognitive and learning profiles | Web design for e-learning environment development (continued) |
| 4 | Cognitive styles and individual preferences in layout and content | Learning styles and learning strategies | Cognitive styles and e-learning environment development |
| 5 | Personality types—how your personality can affect your learning | Cognitive styles and individual preferences in layout and content | Learning styles and learning environment development. ASSIGNMENT 1 DUE |
| 6 | Learning Resource Centres (LRC), Web sites, and Personalized E-Learning Environments (PELE) | Personality types—how your personality can affect your learning | Learning communities and e-learning environment development |
| 7 | Online learning and Web usability—tips on good learning environment design | Learning Resource Centres (LRC), Web sites, and Personalized E-Learning Environments (PELE) | Learning strategy features for e-learning environment development |
| 8 | Rich Pictures and you—seeing yourself in the context of your learning | Online learning and Web usability—tips on good learning environment design | Learning support features for e-learning environment development. ASSIGNMENT 2 DUE |
| 9 | Ideas for your PELE content—the BookHouse and the LearnHouse | Rich Pictures and you—seeing yourself in the context of your learning | PELE development |
| 10 | Developing learning strategies—units & assessment | Ideas for your PELE content—the BookHouse and the LearnHouse | PELE development |
| 11 | Developing learning strategies—semester, year and course | Developing learning strategies—units & assessment | PELE development |
| 12 | Unit review | Presentations | Presentations. ASSIGNMENT 3 DUE |
| 13 | Feedback sessions | Feedback sessions | Feedback sessions |

or used more directly to suggest the inclusion of specific e-learning related features. Later iterations of the process and methodology introduced further elements, such as the Rich Picture, to enable students to consider additional aspects of how they might interface with both online learning environments and the university as e-learning environment. In this way, the methodology and techniques, as applied in the form of
a taught unit, can be seen as an e-learning system which helps the student to produce a series of interfaces for integrating with learning environments, at the same time aiding the development of the student as an autonomous e-learner. There was a considerable difference between developing the methodology with a cohort of 64 second- and third-year Human Computer Interaction students and with a much larger number of students taking a
university preparation course. Each iteration played an important role in the overall development of the methodology and its emergence as a tool which could be used with a broad range of general students, as in the university preparation course, Learning at University. With the more general type of course exemplified by Learning at University, an initial concern was the apparently large potential difference in the skills likely to be available to each group for developing the e-learning environment as a Web site. There was an emphasis throughout the process that this was not a technical or technology-based process, but one of reflection and design. The form and content of the environment is given far greater emphasis than the technical "bells and whistles" that can be added using technology, no matter how valuable their contribution may be. To this end, the current generation of Web development tools such as FrontPage (and even, at a stretch, Word) and their associated tutorials provide an initial set of pages which can be developed with relative ease. The experience for the student continues to vary enormously in terms of success and frustration, but increasing familiarity with personalizing mobile phone interfaces adds to the confidence of many students. The sense of achievement in having developed a personal e-learning environment and the associated skills is often mentioned as one of the tangible benefits by students in the feedback survey. The combination of RAPAD and the cognitive profile instruments affords a framework and a set of processes for enabling students to engage with their own and others' profile elements and apply them in a reflexive manner to a practical design exercise.
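As an illustrative aside (the page names and the use of a script are invented here; the original tutorials relied on tools such as FrontPage), the "initial set of pages" described above might look like this when generated programmatically:

```python
from pathlib import Path

# Hypothetical starter pages a PELE tutorial might begin from; the page
# names and structure are invented for illustration only.
PAGES = {
    "index.html": "My Personalized E-Learning Environment (PELE)",
    "profile.html": "My cognitive and learning profile",
    "strategies.html": "My learning strategies",
    "resources.html": "Links to university e-learning resources",
}

def build_starter_site(root: str) -> list[Path]:
    """Write a minimal set of cross-linked pages to start a PELE from."""
    site = Path(root)
    site.mkdir(parents=True, exist_ok=True)
    # Every page carries the same simple navigation bar linking the others.
    nav = " | ".join(f'<a href="{name}">{name}</a>' for name in PAGES)
    written = []
    for name, title in PAGES.items():
        page = site / name
        page.write_text(f"<html><body><h1>{title}</h1><p>{nav}</p></body></html>")
        written.append(page)
    return written

print([p.name for p in build_starter_site("pele_site")])
```

The point of the sketch is the one made in the text: the scaffolding is trivial, so the student's effort goes into reflection on form and content rather than into the technology.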
It is a complex scenario, but the repeated failure of many quasi-experimental attempts to uncover significant relationships between learning measures and learning material presentation (or interface design) suggested a need for a more sophisticated approach to e-learning systems design. Several major studies have concluded
that there is a need to consider the process as well as the outcomes, and that the qualitative data provided by student comments are the most useful sources of explanatory data. Systems theory and a systems approach enabled this and helped the concept of flexible student alignment to emerge with the production of adaptive personalized e-learning environments. Flexible student alignment focuses on the learner and considers alignment from the student perspective. As suggested above, a close fit and tight coupling between the student and the PELE as e-learning support system, plus the facility for loose coupling and flexibility between the PELE and the university as e-learning environment, enable students to better align themselves with the different teaching-learning environments encountered. In this way, using RAPAD to enable flexible student alignment allows the student to exercise individual flexible alignment. This is an important characteristic when considering the many and varied teaching-learning environments and other university e-learning support systems likely to be encountered by each student. The concept of process reengineering in the information systems field draws on the idea that developments in new information and communications technologies allow us to do many things in fundamentally different ways than before. Instead of using the technology just to further improve how something is done, reengineering suggests we look for ways of reconceptualising how things are done. The use of an iterative, participatory process for effective technology design is part of this reconceptualisation. The student becomes a central part of the technology design process, whether as a specialist (e.g., HCI) student or, with more help, as a pre- or first-year university student.
In doing so, each individual actively engages with fundamental aspects of his or her learning in ways that produce a valuable e-learning environment plus improved metacognitive and self-regulatory characteristics. The use of RAPAD produces a PELE as an effective
e-learning support system and the student and e-learning environment combine to form an efficient learning support system for e-learning and lifelong learning. This chapter has presented the background, content, and empirical use of the RAPAD methodology. Definitions and key terms were provided and followed by a section which discussed the need for new and personalised approaches for supporting e-learning. The changing conceptions of learning and the complexity of learning were considered. In order to provide a coherent overview of the work, a systems perspective of the student, methodology, and PELE as a learning system was presented. The concept of Flexible Student Alignment was then introduced before the need for human-centred e-learning systems design and participatory design was outlined. The development of RAPAD as a participatory methodology was then summarized. This was followed by a broad description of the research phases and empirical work which comprised the development of RAPAD as an e-learning methodology. Future trends were then suggested before concluding points were made.
Future Research Directions

In terms of future research directions, several prospects exist to develop RAPAD and to take the personalized e-learning environment forward. These include developing advanced adaptive virtual environments. The enormous success of, and developments in, alternative digital environments such as Second Life (http://secondlife.com) suggest that this is possible and likely. Developing the skills of learning and gaming and integrating them with mobile virtual environments means that e-learning environments can become more personalised, powerful, and accessible. Other developments include matching the form and content of the environment to the additional cognitive preferences of individual students. Developments in auditory and visual digital data offer exciting
opportunities to personalize the environments in more effective ways. Software agents, part of an earlier iteration of the work, have developed and become more mainstream. Their potential for the gathering, filtering, and selection of relevant learning information and materials has been enhanced by their increased use for these purposes in the business arena. The use of XML (eXtensible Markup Language) will enable software agents to better match the content of documents to the cognitive preferences of the individual student. All of these examples represent the potential for research and development in fertile areas.
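As a sketch of the kind of matching anticipated above (the XML vocabulary, the catalogue, and the helper function are all invented for illustration), an agent could select learning materials whose declared format matches a student's recorded preference:

```python
import xml.etree.ElementTree as ET

# Hypothetical metadata an authoring tool might attach to learning materials.
CATALOGUE = """
<materials>
  <item format="imagery" title="Concept map of systems theory"/>
  <item format="verbal" title="Essay: approaches to studying"/>
  <item format="imagery" title="Video walkthrough of web design"/>
</materials>
"""

def select_for(preference: str, catalogue_xml: str) -> list[str]:
    """Return titles of items whose declared format matches the preference."""
    root = ET.fromstring(catalogue_xml)
    return [item.get("title")
            for item in root.findall("item")
            if item.get("format") == preference]

print(select_for("imagery", CATALOGUE))
# ['Concept map of systems theory', 'Video walkthrough of web design']
```

Richer profiles would of course need a richer vocabulary, but the principle is the one the chapter suggests: once documents carry machine-readable descriptions of their form, an agent can filter them against an individual cognitive profile.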
Cognitive, Virtual, and Organisational Interfaces

Subsequent work has suggested that students can use personal cognitive profile knowledge to develop a series of different but individually related e-learning interfaces. Each interface serves a separate but important function in helping the student to develop a series of strategies for interfacing with the university at different levels—the personal, the virtual, and the organisational. The first interface would operate at the level of self-awareness. Here the knowledge and understanding of an individual's cognitive profile would provide a framework in which that individual can better formulate a series of learning strategies (based on, for example, subject, course, year, semester, unit, etc.). These learning strategies would then become part of the learning resources on which the student can draw. The second interface operates at a more functional level and consists of a Web-based interface for information management purposes. The development of the first interface will help inform the design and development of the second. In addition, besides being structured around the individual student's cognitive profile, an awareness of preferences in terms of the format and content of educational materials helps each student to interact more effectively with learning materials.
The third interface is at the level of the virtual organisation. The techniques associated with Checkland's Soft Systems Methodology (Checkland & Scholes, 1990), especially rich pictures, root definitions, and conceptual models, are used to help each individual student to locate himself or herself at the centre of an organisational e-learning system. Again, the development of the first two interfaces serves to enhance the student's understanding of the individual aspects of their own e-learning requirements in the context of the university as e-learning system. The concept of the three interfaces has been the product of several related iterations of the initial study in a series of taught units. The initial research and the theoretical background of the overall research and methodology development are reported more fully elsewhere (Webster, 2002, 2003, 2004, 2005). The use of Web-based technologies, and the adoption of these technologies to personalise corporate computing (Computing: Work-Life Balance, 2007), ensures that RAPAD is well placed to be extended as an adaptive methodology to enhance the process of lifelong learning in the workplace.
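To make the second, Web-based interface concrete, the sketch below translates a cognitive profile into interface settings. The dimension names follow Riding's Cognitive Styles Analysis (wholist-analytic and verbal-imagery ratios), but the thresholds and mapping rules are invented for illustration and are not part of the reported methodology:

```python
def interface_settings(wholist_analytic: float, verbal_imagery: float) -> dict:
    """Map two CSA-style ratio scores to hypothetical PELE layout choices.

    Scores above 1.0 lean analytic (or imager); scores below 1.0 lean
    wholist (or verbaliser). The cut-off and the layout vocabulary are
    assumptions made for this sketch.
    """
    return {
        "navigation": ("hierarchical menus" if wholist_analytic > 1.0
                       else "single overview page"),
        "content": ("diagrams and maps" if verbal_imagery > 1.0
                    else "textual summaries"),
    }

# An analytic imager gets structured navigation and visual content.
print(interface_settings(wholist_analytic=1.3, verbal_imagery=1.2))
```

In RAPAD the student, not an algorithm, makes these design decisions; the sketch only shows how the profile-to-layout reasoning could be expressed once it has been made explicit.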
References

Argyris, C. (2004). Double-loop learning and implementable validity. In H. Tsoukas & N. Mylonopoulos (Eds.), Organizations as knowledge systems: Knowledge, learning, and dynamic capabilities (pp. 29-45). New York: Palgrave Macmillan. Argyris, C., & Schön, D.A. (1996). Organizational learning II. Reading, MA: Addison-Wesley Publishing Company. Avison, D., & Fitzgerald, G. (2003). Information systems development: Methodologies, techniques and tools (3rd ed.). Maidenhead: McGraw-Hill.
Avison, D.E., & Wood-Harper, A.T. (1990). Multiview: An exploration in information systems development. Henley-on-Thames: Alfred Waller. Banathy, B.H. (1996). Systems inquiry and its application in education. In D.H. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 74-92). New York: Prentice Hall. Banathy, B.H. (1999). Systems thinking in higher education: Learning comes to focus. Systems Research and Behavioral Science, 16, 133-145. Bayne, R. (1995). MBTI: A critical review. London: Chapman & Hall. Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32, 347-364. Biggs, J. (1999). Teaching for quality learning at university: What the student does. Buckingham: Open University Press. Biggs, J.B. (2003). Teaching for quality learning at university (2nd ed.). Buckingham: SRHE & Open University Press. Bishop, J. (2006, February). Training talk newsletter. Retrieved October 16, 2007, from http://www.dest.gov.au/sectors/training_skills/publications_resources/trainingtalk/issue_20/ Blair, A. (2004, May 3). Speech to NAHT conference. Retrieved October 16, 2007, from http://www.number10.gov.uk/output/Page5730.asp Bransford, J.D., Brown, A.L., & Cocking, R.R. (Eds.). (1999). How people learn: Brain, mind, experience and school. Washington: National Academic Press. Briggs, J. (2003). Rich pictures of UK education. Retrieved October 16, 2007, from http://www.reengage.org/go/Article_111.html Checkland, P. (1981). Systems thinking, systems practice. Chichester: John Wiley.
Checkland, P. (2000). Soft systems methodology: A 30-year retrospective. Systems Research and Behavioral Science, 17, S11-S58. Checkland, P., & Holwell, S. (1998). Information, systems and information systems. John Wiley and Sons. Checkland, P., & Scholes, J. (1990). Soft systems methodology in action. John Wiley & Sons. Coffield, F., Moseley, D., Hall, E., & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. London: Learning and Skills Research Centre. Computing: Work-Life Balance. (2007). The Economist, 23/12/06-5/1/07, 99-100. DfES. (2003). Widening participation in higher education. London: Department for Education and Skills. Driscoll, M.P. (2002). How people learn (and what technology might have to do with it). ERIC Digest, Syracuse University. Retrieved October 16, 2007, from http://www.ericdigests.org/20033/learn.htm El Louadi, M., Galletta, D.F., & Sampler, J.L. (1998). An empirical validation of a contingency model for information requirements determination. ACM SIGMIS Database, 29(3), 31-51. Entwistle, N. (2003). University teaching-learning environments and their influences on student learning: An introduction to the ETL project. In Proceedings of the 10th Conference of the European Association for Research on Learning and Instruction (EARLI). Padova, Italy: EARLI. Entwistle, N., McCune, V., & Hounsell, J. (2002). Approaches to studying and perceptions of university teaching-learning environments: Concepts, measures and preliminary findings. Edinburgh: University of Edinburgh.
Entwistle, N., Tait, H., & McCune, V. (2000). Patterns of response to an approach to studying inventory across contrasting groups and contexts. Paper presented at the European Journal of the Psychology of Education. Ford, P., Goodyear, P., Heseltine, R., Lewis, R., Darby, J., Graves, J., et al. (1996). Managing change in higher education: A learning environment architecture. Society for Research in Higher Education and Open University Press. Goodyear, P. (2001). Effective networked learning in higher education: Notes and guidelines (Deliverable 9) (Vol. 3). Lancaster: CSALT, Lancaster University. Goodyear, P. (2002). Online learning and teaching in the arts and humanities: Reflecting on purposes and design. In E.A. Chambers & K. Lack (Eds.), Online conferencing in the arts and humanities (pp. 1-15). Milton Keynes: Institute of Educational Technology, Open University. Goodyear, P. (2005). Educational design and networked learning: Patterns, pattern languages and design practice. Australasian Journal of Educational Technology, 21(1), 82-101. HUSAT. (1990). The HUFIT planning, analysis and specification toolset. Loughborough: HUSAT Research Institute, Loughborough University. Jonassen, D.H., & Grabowski, B.L. (1993). Handbook of individual differences, learning, and instruction. Hillsdale, NJ: Lawrence Erlbaum Associates. Kassabova, D., & Trounson, R. (2000). Applying soft systems methodology for user centred design. In Proceedings of the NACCQ 2000 (pp. 159-165). Wellington. Knight, P.T. (2001). Complexity and curriculum: A process approach to curriculum-making. Teaching in Higher Education, 6(3), 369-381.
Knight, P.T., & Trowler, P. (2001). Departmental leadership in higher education. Buckingham: Society for Research in Higher Education & Open University Press. Laurillard, D. (1993). Rethinking university teaching: A framework for the effective use of educational technology. London: Routledge. Laurillard, D. (1999). A conversational framework for individual learning applied to the 'learning organisation' and the 'learning society'. Systems Research and Behavioral Science, 16, 113-122. Laurillard, D. (2002). Rethinking university teaching: A conversational framework for the effective use of learning technologies (2nd ed.). London: Routledge.
Onions, C.T. (Ed.). (1983). The shorter Oxford English dictionary (3rd ed.). Oxford: Oxford University Press. Patel, N.V. (1995). Application of soft systems methodology to the real world process of teaching and learning. International Journal of Educational Management, 9(1), 13-23. Pejtersen, A.M. (1989). The BOOKHOUSE: An icon based database system for fiction retrieval in public libraries. In Proceedings of 7th Nordic Information and Documentation Conference, Århus, Denmark. Peterson, E. R., Deary, I. J., & Austin, E. J. (2003). The reliability of Riding’s Cognitive Style Analysis test. Personality and Individual Differences, 34, 881-891.
Marton, F., & Ramsden, P. (1988). What does it take to improve learning? In P. Ramsden (Ed.), Improving learning: New perspectives. London: Kogan Page.
Pillay, H. (1998). An investigation of the effect of individual cognitive preferences on learning through computer-based instruction. Educational Psychology, 18(2), 171-182.
McCune, V. (2003). Promoting high-quality learning: Perspectives from the ETL project. In Proceedings: 14th Conference on University and College Pedagogy. Fredrikstad: Norwegian Network in Higher Education.
Preece, J., Rogers, Y., & Sharp, H. (2002). Interaction design: Beyond human computer interaction. Wiley.
Metros, S.E., & Bennett, K. (2002). Learning objects in higher education. Educause Research Bulletin, 19, 2. Retrieved October 16, 2007, from www.educause.edu/ir/library/pdf/ERB0219.pdf Monk, A., & Howard, S. (1998, March-April). The rich picture: A tool for reasoning about work context. Interactions, 21-30. Myers, I.B., McCaulley, M.H., Quenk, N.I., & Hammer, A.L. (1999). MBTI manual: A guide to the development and use of the Myers-Briggs Type Indicator. Palo Alto, CA: Consulting Psychologists Press. Nowack, K. (1996). Is the Myers Briggs Type Indicator the right tool to use? Performance in Practice, 6.
Riding, R., & Rayner, S. (1998). Cognitive styles and learning strategies: Understanding style differences in learning and behaviour. London: David Fulton Publishers. Schön, D.A. (1971). Beyond the stable state: Public and private learning in a changing society. Temple Smith. Schön, D.A. (1983). The reflective practitioner. New York: Basic Books. Schön, D.A. (1987). Educating the reflective practitioner: Toward a new design for teaching and learning in the professions. San Francisco: Jossey-Bass Publishers. Schön, D.A. (1991). The reflective turn: Case studies in and on educational practice. New York: Teachers Press, Columbia University.
Shuell, T. (1992). Designing instructional computing systems for meaningful learning. In M. Jones & P. Winne (Eds.), Adaptive learning environments. New York: Springer-Verlag. Simons, R.J., van der Linden, J., & Duffy, T. (Eds.). (2000). New learning. Dordrecht: Kluwer Academic. Summerville, J. (1999). Role of awareness of cognitive style in hypermedia. International Journal of Educational Technology, 1. Tait, H., Entwistle, N.J., & McCune, V. (1998). ASSIST: A reconceptualisation of the approaches to studying inventory. In C. Rust (Ed.), Improving student learning (pp. 262-271). Oxford: Oxford Centre for Staff and Learning Development. Trow, M. (1973). Problems in the transition from elite to mass higher education. Berkeley, CA: Carnegie Commission on Higher Education. Trowler, P., & Knight, P.T. (2000). Coming to know in higher education: Theorising faculty entry to new work contexts. Higher Education Research & Development, 19(1). Trowler, P., Saunders, M., & Knight, P.T. (2003). Change thinking, change practices: A guide to change for heads of department, programme leaders and other change agents in higher education. Learning and Teaching Support Network, Generic Centre. Twigg, C.A. (1994). The changing definition of learning. Educom Review, 29(4). Vermetten, Y.J., Vermunt, J.D., & Lodewijks, H.G. (2002). Powerful learning environments? How university students differ in their response to instructional measures. Learning and Instruction, 12, 263-284. Vermunt, J.D. (1998). The regulation of constructive learning processes. British Journal of Educational Psychology, 67, 149-171.
Vora, P. (1998). Human factors methodology for designing Web sites. In C. Forsythe, E. Grose & J. Ratner (Eds.), Human factors and Web development. Hillsdale, NJ: Lawrence Erlbaum.
Webster, W.R. (2002, July). Metacognition and the autonomous learner: Student reflections on cognitive profiles and learning environment development. In A. Goody (Ed.), Spheres of influence: Ventures and visions in educational development. Proceedings of ICED 2002, UWA, Perth, Australia: University of Western Australia. Webster, W.R. (2003). Cognitive styles, metacognition and the design of e-learning environments. In F. Albalooshi (Ed.), Virtual education: Cases in teaching and learning (pp. 225-240). Hershey, PA: Idea Group Publishing. Webster, W.R. (2004, November 2-3). A learnercentred methodology for learning environment design and development. In Exploring integrated learning environments. Proceedings, Online Learning and Training 2004, Brisbane. Brisbane, Australia: Queensland University of Technology. Webster, W.R. (2005). A reflective and participatory approach to the design of personalised learning environments. Unpublished PhD Thesis, Lancaster, Lancaster University. Weil, S. (1999). Re-creating universities for beyond the stable state: From dearingesque systematic control to post-dearing systemic learning and inquiry. Systems Research and Behavioral Science, 16, 170-190. Wilson, B.G. (1996). What is a constructivist learning environment? In B.G. Wilson (Ed.), Constructivist learning environments: Case studies in instructional design (pp. 3-8). Educational Technology Publications.
Additional Readings

Goodyear, P. (2002). Online learning and teaching in the arts and humanities: Reflecting on purposes and design. In E.A. Chambers & K. Lack (Eds.), Online conferencing in the arts and humanities (pp. 1-15). Milton Keynes: Institute of Educational Technology, Open University. Haag, S., Cummings, M., & McCubbery, D.J. (2004). Management information systems for the information age (4th ed.). Boston: McGraw-Hill. Riding, R., & Rayner, S.G. (Eds.). International perspectives on individual differences: Cognitive styles (Vol. 1). Stamford: Ablex Publishing Corporation.
Sternberg, R.J., & Zhang, L.F. (Eds.). (2001). Perspectives on thinking, learning and cognitive styles. Mahwah, NJ: Lawrence Erlbaum Associates. Tsoukas, H., & Mylonopoulos, N. (Eds.). Organizations as knowledge systems: Knowledge, learning, and dynamic capabilities. New York: Palgrave Macmillan. Wierstra, R.F.A., Kanselaar, G., Van Der Linden, J.L., Lodewijks, H.G.L.C., & Vermunt, J.D. (2003). The impact of the university context on European students’ learning approaches and learning environment preferences. Higher Education, 45, 503-523.
Chapter II
A Heideggerian View on E-Learning Sergio Vasquez Bronfman ESCP-EAP (European School of Management), France
Abstract

This chapter introduces some ideas of the German philosopher Martin Heidegger and shows how they can be applied to e-learning design. It argues that Heideggerian thinking (in particular, the interpretation developed by Hubert Dreyfus) can inspire innovations in e-learning design and implementation by putting practice at the center of knowledge creation, which in the case of professional and corporate education means real work situations. It also points out the limits that the nature of human beings imposes on distance learning. Furthermore, the author hopes that Heidegger's ideas will not only inform researchers of a better design for e-learning projects, but also illuminate practitioners on how to design e-learning courses aimed at bridging the gap between "knowing" and "doing."
Introduction

In the field of professional, continuous, and corporate education (PCCE)1 there is a recurrent complaint concerning the effectiveness of the educational process (Mintzberg, 1988, 1996, 2004; Schön, 1983). Effectiveness is "the ability of a system to produce what it must produce." Therefore, in an effective PCCE system people should learn to do what they must do when working in their companies. Unfortunately, this is not what
one can observe; actual PCCE systems produce people who get a lot of knowledge but who are unable to put it into practice. One of the main reasons for this knowing-doing gap (Pfeffer & Sutton, 2000) is what I call infocentrism, which is a wrong interpretation of what learning is. Infocentrism says that learning is a kind of information system: knowledge is transmitted to learners through lectures and/or accessed through readings, learners must retain this knowledge, and finally professors organize
tests of knowledge retention that we call exams.2 In good educational settings, exercises and case studies are also performed. Implicitly, the infocentric perspective makes the hypothesis that if knowledge is transmitted properly (i.e., lectures are clear) then application (practice) is obvious. In fact, this hypothesis is falsified, and hence the knowing-doing gap comes into existence. As a business school professor and a practitioner, I am committed in research and practice to bridging this gap. I see my professional activity as an opportunity for innovation, and hence I design and implement educational experiences to add value to professional, continuous, and corporate education, by using information and communication technologies (ICT) and applying innovative pedagogical methods. In this journey—thanks to Fernando Flores and his collaborators (Spinosa, Flores, & Dreyfus, 1997; Winograd & Flores, 1986)—I have come into contact with the ideas of Martin Heidegger, whose philosophy I have found to be one of the most valuable for learning innovation (including the use of information and communication technologies to add value to learning). With the exception of the seminal work of Hubert Dreyfus, little research has been done on the impact of Heidegger's philosophy on learning and e-learning.3 This impact could inspire important innovations or, at least, a more accurate interpretation of learning, resulting in good e-learning design and implementation. The purpose of this chapter is to show how Heidegger's ideas can illuminate learning innovation (including e-learning) and, in particular, help to bridge the knowing-doing gap.
Heidegger's Philosophy and Its Implications for Learning

Martin Heidegger (1889-1976) was a very influential German philosopher, probably the most influential of the 20th century. His work
dealt with many topics, but in this chapter I will focus on his thinking about human activity and the relationship between theory and practice. My argumentation is based on the work of Hubert Dreyfus, one of the best specialists on Heidegger's philosophy, who has made important contributions applying Heideggerian thinking to learning and artificial intelligence (Dreyfus, 1986, 1991, 1992, 2001). The essence of Heidegger's thinking is that Western philosophy, from Plato onward, has misunderstood the nature of being. In particular, he argues that metaphysical and scientific theories have tended to force all questions about being into categories better suited to describing the detached contemplation of inert objects. As a result, according to Heidegger, philosophers and scientists have overlooked the more basic, pretheoretical ways of being from which their theories derive, and, in applying those theories universally, have confused our understanding of human existence. To avoid these deep-rooted misconceptions, Heidegger believes he must restart philosophical inquiry in a different way, using a novel vocabulary and undertaking an extended criticism of the history of philosophy (Wikipedia, 2006). Heidegger says that our everyday action is rooted in the ability to act pre-reflectively when we are thrown into a situation.4 Most of the time our lives happen this way (dressing ourselves, going to one place or another, working, eating, etc.). We do not think, we just do, and we cope with the situation. There is only some small fraction of time when life happens in the conscious and deliberate mode of doing. Heidegger is not against theory. He says that theory is an important and powerful instrument, but a limited one, only a subset of the ways human beings cope with things.
In particular, Heidegger observes that in order to do something, even a high-level cognitive action, we do not need to have a theory of the domain in which we are taking action.5 He also says that it is impossible to have a theory about what makes theory possible. If he
A Heideggerian View on E-Learning
is right, his analysis calls into question one of the most important postulates of Western philosophy, whose roots go back through Descartes to Plato: that human activity can be explained with theories, that human beings are conscious subjects who observe objects, and that a theoretical and detached perspective is better than a practical and involved one. Against this, Heidegger says that theoretical, detached knowledge implies a practical, involved know-how, which precedes theoretical knowledge and cannot be explained by it. Even theoretical knowledge depends on practical skills. The detached knower should then be replaced by an involved doer. As human beings, our relationship with "things" is always purposeful. Heidegger says that we do not encounter "simple things"; rather, we use things in order to achieve something. Heidegger calls these things "equipment," in a very broad sense which includes tools, materials, clothes, toys, machines, houses, and so forth. The fundamental characteristic of equipment is its purposeful use; in fact, Heidegger defines a piece of equipment in terms of its purposeful use. When everything is working well, equipment is characterized by its transparency: it is "ready at hand." Heidegger called this availableness (Zuhandenheit). However, when we face a breakdown (i.e., a "surprise"), when something is not "ready at hand," we move to what Heidegger calls occurrentness (Vorhandenheit). According to Dreyfus, there are several stages in this move, going from conspicuousness (a short breakdown, easily repaired), to obstinacy (which implies stopping to think, planned reflection: "what ifs," "if-then-elses," and so forth, all of this in a context of involved action), and to obtrusiveness (detached theoretical reflection). The basic postulates of Western philosophy have had dramatic consequences for learning and teaching.
As a matter of fact, in every discipline people try to find context-free elements, basic concepts, attributes, and so forth, and relate
them through "laws" (as in the natural sciences) or rules and procedures (as in structuralism and cognitivism). Therefore, teaching follows this path: we first present theory (the more abstract, the better) and then applications (examples, exercises, or case studies), giving rise to the knowing-doing gap. Moreover, when talking about learning, people usually confuse different kinds of learning. In order to clarify the discussion I will make some important distinctions between kinds of learning.6

• Learn about, for example, negotiation, communication, history, medicine, software design, and so forth
• Learn to do, for example, how to negotiate, how to communicate well, how to conduct research in history, how to diagnose illnesses, how to design software, and so forth
• Learn to be, for example, a negotiator, a communicator, a researcher in the field of history, a doctor, a software designer, and so forth

One can love history and be interested in medicine or in human communication. By reading books on these topics, attending conferences, taking courses (online or face-to-face), and so forth, one can learn a lot about history, medicine, and human communication, but that does not mean that one will be able to conduct research in history, to diagnose illnesses, or to communicate effectively. In other words, one will not be able to do. Following the same logic, if one has successfully conducted a first piece of research in history, has diagnosed some simple illnesses, or has solved a communication problem, that does not mean that one will be considered a historian, a doctor, or a professional in the field of human communication. In other words, one will be able to do, but one will not yet be (a professional recognized as such by his/her peers). In order to reach this level one must have a significant amount of practice in the appropriate community
(negotiators, communicators, historians, doctors, software designers, etc.). Bearing in mind these learning distinctions, it can be said that one of the causes of the knowing-doing gap is that the vast majority of the educational offer satisfies only the "learn about" kind of learning, while many people expect at least to "learn to do." Because the educational practices needed in order to "learn about" are not sufficient when one needs to "learn to do," there is an important discrepancy between supply and demand in education. As I have said, Heidegger's philosophy can give us new insights for designing innovative educational offers. Summarizing Heidegger's contributions, one can say that good learning design should:

• Throw the learner into the situation in which he/she must know, that is, the context of the practice to be mastered
• Always start with local and concrete examples and/or involved practice, and then move gradually to detached reflection
• Design situations where one must deal with breakdowns
• Make available a vast repertoire of situations, cases, and so forth, on the topic to be learned
• Design or use technology that is "ready at hand," that is, easy to use, transparent7
For instance, in order to improve "learn about..." teaching practices, one should not start by presenting definitions of the topic to be studied and then move from general abstractions to particular situations, from theory to practice. Instead, one should do exactly the opposite: start by presenting particular stories in which the studied phenomenon shows itself (because our encounter with a new phenomenon always happens through particular and concrete examples), and then move to a definition of the "thing," hence going from practice to theory, from concrete examples to abstract definitions.
An example of the above is a series of talks that I designed for the IT department of a Spanish petroleum company. The concern of the chief information officer (CIO) was to improve the technical culture of the computer professionals and to share the accumulated knowledge across the different specialities of the department. For instance, in designing the talk on networks and telecommunications we started with a very concrete situation for all of the targeted IT professionals: when one sends a message from Building A to Building B,8 what happens (technically speaking)? The presentation described step by step the different technologies involved in this process (computers, servers, switches, routers, optical fibre, and so forth), then showed the IP addresses of all the elements included in this network, and continued moving from significant examples (significant for the IT professionals of this company) to definitions and back to examples. All of the other presentations were designed this way. The result was a good evaluation and strong attendance by the IT professionals, whereas attendance and evaluations had been poor in the past. In the next section I will present a model for the two other kinds of learning, "learn to do..." and "learn to be...": in other words, learning a skill and learning to be a professional.
PHENOMENOLOGY OF LEARNING A SKILL

Hubert Dreyfus has made a major contribution by describing the process through which one learns and masters a skill. According to Dreyfus, this process is always a kind of apprenticeship. In the first presentation of his model (Dreyfus, 1986), he distinguished five levels: novice, advanced beginner, competence, proficiency, and expertise. Later, he added two more levels: mastery and practical wisdom (Dreyfus, 2001). Inspired by his work, I will present a simplified model of
apprenticeship with only three levels: beginner, competent, and expert.
Beginner

The instruction process should, of course, start by throwing the learner into a situation close to the real work situation the learner will be in. The instructor gives learners the information (facts, rules, procedures) they need to cope with the situation and coaches them. In corporate learning (i.e., learning programmes especially designed for a single company) and in executive education, one can benefit from the experience of the learners and work with their own situations instead of giving them case studies, which bring practice to the classroom but not the students' own practice. Therefore, the situations into which students are thrown should be based on the learners' everyday coping, that is, the way they deal every day with some subject or topic, the way they face it every day at the workplace.9 In undergraduate education, where students do not have professional experience, one should turn to traditional case studies, role playing, and/or computer simulations. In any case, a learner should be thrown into a situation which is significant to him/her (e.g., a work situation) in order to provoke emotions and involvement; this is also necessary to bridge the knowing-doing gap, because at work we experience emotions and involvement.10 For instance, when learning to use a technology (e.g., a piece of software), the designer must create situations focused on the learner's purpose in using this technology, and throw the learner into these situations. If unemployed people are to learn how to use software like Word, traditional learning design will present all of the functions of the software and then move to applications. A Heideggerian-based design will instead ask the learners to write a curriculum vitae (which is probably the main purpose of using a
word processor for an unemployed person) and, in this process, have them learn the main functionalities of Word. Another example is the e-learning courses I designed for the new employees of la Caixa, the most important Spanish savings bank. Using participative course design methods, the design team worked with the end users of the courses, that is, new employees and their managers. For instance, when designing a course on insurance, we asked them: what is the everyday coping of la Caixa's new employees with insurance? The answer helped us to focus on the skills that new employees must come to master when dealing with insurance (for instance, selling insurance that takes care of customers' concerns). Then we asked for recurrent situations faced by the new employees in this field, which led us to write a sequence of mini cases. At the end of each mini case learners have to answer questions like: "What would you do in this situation?" "What kind of products can you offer this client?" "What would be your advice to this customer?" and so forth. Answers must generally be sent to a forum for discussion with the online classroom colleagues, moderated by their online trainer (who is a branch manager). Relevant information for performing these activities is suggested to the learners (which they can access on the Web pages of the courses). All of the courses were structured as a series of mini cases. The learner always starts with a case (which throws him/her into a situation based on the everyday coping of la Caixa employees with the topic of the course); the cases are therefore concrete examples of involved practice. The material learners can access in order to perform these activities gives them definitions and general knowledge they can apply to different particular situations.
Competent

As the learner becomes competent in coping with "normal" situations, in applying general rules and procedures to particular situations, the instructor
should move to a different kind of situation and make the learner cope with breakdowns: something produces unexpected results, an error resists correction, or we begin to look at something in a new way.11 The learner's response to the situation, based on rules and procedures, will gradually be replaced by situational discriminations accompanied by associated responses. At the beginning of this level, the learner will mostly produce reasoned responses and will need to "stop and think," but as he/she becomes really competent, intuitive behaviour will gradually take over. To help the learner become competent, the instructor must therefore design situations that provoke breakdowns. In this sense, role playing can be better than case studies, especially in undergraduate education. For instance, in courses like communication or negotiation, one can easily imagine role plays where learners face "surprises." Course sessions can therefore start with role plays and then reflect on the observed behaviours; the instructor can use these reflective moments to present theories, concepts, methods, and techniques. In corporate learning and executive education, the instructor should, in addition to the above, design discussions of learners' breakdowns at work (Mintzberg, 2004) and use action learning techniques (Pedler, 1991; Revans, 1980). Action learning was invented by Reginald Revans when he was leading the training department at the National Coal Board in the United Kingdom. It is based on two important points: (a) work on the real problems faced by learners, and (b) work on problems where there is confusion and ignorance, where nobody has the answer. This is done in a "learning set," that is, a group of 5-8 people whose main goal is to learn from their own experience through questioning and reflecting. The group decides on the common problem/opportunity on which to work. People look for new interpretations, new ways of settling the problem/opportunity.
A good guide to doing this is to work on the following questions:
• What am I trying to do?
• What is stopping me from doing it? What is the problem?
• What action will I take in order to overcome the obstacles?

It is in this process that people learn from each other and create new knowledge. In this school of thought, learning involves programmed knowledge (knowledge one gets from outside the set through lectures, seminars, books, etc.), but most of the learning occurs through fresh questions that help the person addressing the problem to look at it in different ways so that better solutions can be found. Another important point here is that learning means implementation (stopping at the analysis and recommendations phase is not sufficient). Action learning is thus a cyclical process: it starts with problem discussion; people look for new ways of seeing the problem, find solutions, implement them, and observe the results; then the process starts again with the discussion of implementation problems. One can also say that in applying action learning techniques and/or the learning methods suggested by Mintzberg, instructors are promoting "reflection-on-action." Elsewhere (Vasquez Bronfman, 2005) I have shown the parallels between a Heideggerian view of learning and Donald Schön's interpretation of reflection, which is quite different from the traditional interpretation of the concept as a detached way of knowing. On the basis of his observations of the artistry shown by competent practitioners, Schön proposes two fundamental concepts in order to explain this artistry: knowing-in-action on the one hand, and reflection-in-action and reflection-on-action on the other (Schön, 1983, 1987). Knowing-in-action refers to the know-how revealed in our daily action when doing our jobs, for example, the instant analysis of a balance sheet. According to Schön, there are in fact many actions we perform spontaneously, without having to think about them. Often, we are not aware of having learned to perform these actions. "Even if sometimes we think before the action, it is still true that most of the time our spontaneous behaviour concerning practical skills does not come from a previous intellectual operation. Nonetheless, we show a kind of knowledge" (Schön, 1996). Our knowing-in-action allows us to cope with daily life. However, sometimes we experience "surprises," either good or bad. An error in a computer programme resists correction, the results of an advertising TV spot are much better than expected, a carefully designed information system is rejected by its users, and so forth. Something unexpected then reveals itself to us. In Schön's interpretation, "reflection" starts when there is a surprise (in other words, when there is a breakdown): something produces unexpected results, and/or we begin to look at something in a new way. We may respond to this situation by reflection, and we may do so in one of two ways. We may reflect on action, thinking back on what we have done in order to discover the causes of the unexpected outcome (stop-and-think). And we may reflect in action, that is, in the midst of action without interrupting it, carrying out on-the-spot experiments to change the situation, "thinking on our feet." The point of reflection-in-action is that we can think about something while doing it; it is the capacity to respond to surprise through improvisation on the spot. Table 1 summarises the parallels between Heidegger's philosophy and Schön's interpretation of reflection.
Expert

The competent performer, immersed in the world of skillful activity, sees what needs to be done but still has to decide consciously how to do it. Faced with a breakdown, the expert not only sees what needs to be achieved but, thanks to a vast repertoire of situational discriminations, immediately sees what needs to be done and simply takes action (Dreyfus, 2001). In Donald Schön's words, a competent performer still needs to reflect-on-action, while the expert is able to reflect-in-action. In a delightful description of the phenomenon, Sir Arthur Conan Doyle captures the point in his first Sherlock Holmes novel, A Study in Scarlet (Chapter 2, The Science of Deduction):

From long habit the train of thoughts ran so swiftly through my mind that I arrived at the conclusion without being conscious of intermediate steps. There were such steps, however. The train of reasoning ran, "Here is a gentleman of a medical type, but with the air of a military man. Clearly an army doctor, then. He has just come from the tropics, for his face is dark, and that is not the natural tint of his skin, for his wrists are fair. He has undergone hardship and sickness, as his haggard face says clearly. His left arm has been injured. He holds it in a stiff and unnatural manner. Where in the tropics could an English army doctor have seen much hardship and got his arm wounded? Clearly in Afghanistan." The whole train of thought did not occupy a second. I then remarked that you came from Afghanistan, and you were astonished.

Table 1. Parallels between Schön's and Heidegger's interpretations

What happens                Schön's interpretation    Heidegger's interpretation
No breakdowns, no surprise  Knowing-in-action         Absorbed coping, availableness
Short breakdown             Reflection-in-action      Conspicuousness
Persistent breakdown        Reflection-on-action      Obstinacy, occurrentness
Flaw                        Reflection-on-action      Obtrusiveness, occurrentness

The question is then how to train to become an expert. First, we believe that an expert cannot be trained only in a classroom. To become an expert one must have significant professional experience, having coped with many different situations, in particular situations leading to breakdowns. In addition, one must have an impressive record of cases in one's profession (other people's practices). Also, it is necessary to study with a master in order to imitate his/her actions and to "steal" part of his/her knowledge (Brown & Duguid, 1996). The best way to do this is to be (again) thrown into daily work situations where one can work side by side with a master (a good manager, a chief engineer, an experienced technician, a senior scientist, a well-known artist, and so forth) and observe the master's way of doing things. What the classroom can do for learning expertise is (a) provide an important collection of external practices in order to enrich the expert's repertoire of situations (a repertoire of cases allows people to make situational discriminations while in action), and (b) offer a place to reflect on practice with peers. In corporate learning, communities of practice are the best candidates to provide the above (Wenger, 1998; Wenger, McDermott, & Snyder, 2002). When a learner moves from "competence" to "expertise," the learner is also moving from "learning to do..." to "learning to be...". I think that in order to "learn to be..." one must go beyond teaching. The works of Jean Lave and Etienne Wenger (Lave & Wenger, 1991) and of John Seely Brown and Paul Duguid (Brown & Duguid, 1991, 2002) have clearly shown that learning is a social process. Moreover, this kind of learning takes place in situated action (in space and time). These authors made a breakthrough in the theory of learning by shifting the focus from the individual as learner to learning as participation
in the social world, from a cognitive process to a social practice. All of this means that nobody can master a job and become an expert outside a community of practitioners. If one wants to learn the job of a doctor (i.e., to learn to be a doctor), one must practice inside a community of doctors; if one wants to become an entrepreneur, one must practice entrepreneurship inside a community of entrepreneurs. Lave and Wenger (1991) created the concept of legitimate peripheral participation (LPP) to draw attention to the point that learners inevitably participate (more or less) in communities of practitioners, and that the mastery of knowledge requires newcomers to move toward full participation in the socio-cultural practices of a community. Therefore, they stress the point that, in order to facilitate learning, one must create an environment that facilitates LPP: access to practice, access to ongoing work activities, and access to practical expertise. Building on situated learning, Etienne Wenger (1998) developed the concept of communities of practice: informal structures that gather people linked through a common practice which is recurrent and stable in time. Communities of practice always develop around what matters to their members; therefore, if one wants to facilitate LPP and to "learn to be," a community of practice is a good candidate. Following this logic, la Caixa has started to cultivate some emergent communities of practice, for instance communities of branch managers. There is a big online community where branch managers discuss their ongoing problems at work, thus sharing their practical knowledge. And there are some local communities of practice where branch managers meet online and face-to-face. Participation in communities of practice will certainly allow one to be thrown into the specific situations and context of the professional practice one wants to master. Also, it will give access to a
vast repertoire of cases on the topic to be learned. However, although participation in communities of practice is obviously a good avenue for professional development, it will not per se allow a given professional to become an expert. It can of course help, but in order to become an expert one must also work with a master and have the will to continuously improve one's professional practice, which cannot be achieved only by sharing knowledge with peers.
ADDED VALUE AND LIMITS OF ICT FOR THE DIFFERENT KINDS OF LEARNING

Following Heidegger, when thinking of ICT as a tool to enhance learning, the educational designer must ensure that the tools will be ready-at-hand. That means that ICT should essentially be "easy to use." For instance, learners should not need to change tools in order to access discussions at a distance: in this sense, using e-mail in order to participate in a community of practice could be better than a dedicated platform, unless every contribution to the online discussion is routed to the learners' e-mail boxes. But technology is not only a tool. Its impressive power usually comes from its disruptive characteristic of being an opener of possibilities. Therefore, above all, the educational designer must ask himself: which new possibilities is this technology opening up in order to support/enhance learning? Or, more precisely, which new possibilities is ICT opening up in order to support/enhance Heideggerian-based learning? As long as one addresses only the "learn about..." kind of learning, ICT can always support and enhance learning: to calculate quickly, to draw and redraw, to accelerate and to slow down time and thus see what is otherwise impossible to see, and so forth, all of this allowing the creation of microworlds where one can experiment without risk (Papert, 1993). In particular, one of the most powerful
characteristics of ICT is the possibility of accessing information "anywhere, anytime." If one wants to learn about cosmology, one can easily imagine accessing resources like e-books, articles, simulations, interviews with well-known cosmologists, and documentaries on the topic. If one wants to learn a skill (i.e., "learn to do..." and "learn to be..."), ICT can still support and enhance learning, but with some nuances. In the classroom, ICT can always open possibilities to support and enhance learning. But if we move to distance learning, things are much more difficult because of the role of the body in learning. In order to clarify the discussion, I will present examples of a motor skill (e.g., learning to play soccer, practise karate, or dance) and of a cognitive skill (communication, negotiation, economics, information systems implementation, etc.). At the "Beginner" level, if one wants to learn a motor skill, ICT can support learning by giving access to videos and documentaries especially designed for this purpose, including exercises and sequences showing particular aspects and techniques. A good example of this is Jane Fonda's famous videos on aerobic dancing. Also, a learner could be filmed and then watch and discuss the recording with a distant coach in order to identify what needs to be improved. The same applies if one wants to start learning a martial art like karate, or to improve one's soccer technique. However, it seems obvious that in order to learn to play soccer, one must play with other players, and that it is impossible to do this at a distance. Playing soccer is not the same as playing soccer on a PlayStation! If one wants to start learning a cognitive skill, ICT can give the learner access at a distance to a series of well-designed cases that will throw the learner into the proper situations, give the learner access to rules and procedures, and allow the learner to experience emotions and involvement.
Thanks to computer-based cooperative work tools, the distant professor can also organise discussions
on the cases. Obviously, at this level ICT can also enhance classroom teaching. Examples are:

• The Technology Enhanced Active Learning (TEAL) project at the Massachusetts Institute of Technology (MIT), where students learn physics moving seamlessly between non-traditional lecture, hands-on experiments, and discussion. Classrooms consist of 13 tables with 9 students per table. Most of the student work involves building, running, and experimenting with simulation models and then solving problems. No traditional lecture takes place; rather, professors and their teaching assistants walk around from table to table, see what interesting issues are unfolding, and occasionally interrupt the entire class to discuss something that a particular table is encountering (Brown, 2005). In particular, TEAL provides impressive media-rich visualizations and simulations, delivered via laptops and the Internet, that allow students to "see" what is otherwise impossible to see: electromagnetism, electrostatics, and so forth. In doing this, the whole TEAL system throws students into the context of research in physics; also, every session starts with concrete examples and involved practice.

• A CD-ROM designed in a school of the Chamber of Commerce and Industry of Paris, which helps students prepare for their business English exam. During the exam the students must read an article from the business press (Business Week, The Economist, Financial Times, Fortune, etc.) and then summarise the text in a discussion with the professor. The student is allowed only 10 minutes to read and understand the article and prepare for the discussion, which lasts another 10 minutes. The CD-ROM contains a random selection of ad hoc articles and has a dictionary that allows for rapid consultation
of the most difficult words. An important feature is that the article disappears from the screen after 10 minutes. In other words, the programme throws the student into the same situation he/she will be in during the exam and helps to prepare the discussion; it also gives the students a vast repertoire of cases on the topic to be learned. The technology is "easy to use," "ready at hand."

• The Practicum in Law at the Open University of Catalonia is an online simulation of the practical training that students must undergo in law firms. The students access a simulated office (with tables, chairs, computers, telephones, law books, and a virtual boss) where there is some work to do. The virtual boss asks something of the student by leaving messages on the virtual table, which the student can click on and read. All of the documentation necessary to do what is requested is available. The student must do the requested work, fill in the documents, and send them to the boss (in fact, a professor), who will comment and suggest actions to take. The process continues until the work is completely done. Again, the system throws the learner into the situation in which the learner must know, in the context of the practice to be mastered, always starting with local and concrete examples.

At the "Competent" level, if one wants to learn a motor skill, it is necessary to enter a face-to-face apprenticeship. At this level, mastering karate requires significant experience in fighting, because it is in fights (and not in exercises) that one is confronted with breakdowns (caused by the opponent). The same applies when learning a dance or a collective sport: ICT can only help to record the learner's movements and separate them into their elements in order to analyse errors, as is done with high-performance athletes. In learning a cognitive skill at this level, ICT can still support significant enhancements,
because ICT-based scenarios can throw students into situations where they will be confronted with breakdowns. Examples of this are:
Growltiger

This is a piece of software for simulating structures in civil engineering at MIT. The programme was originally conceived as a design tool but quickly became a very powerful learning tool. It incorporated a finite element algorithm for studying equilibrium forces. Students could draw on the screen a structure such as a beam or a truss for a bridge, specify the materials and the dimensions, then load the bridge, and the programme showed them deflections, moment diagrams, and so forth. Students could simulate the structure's behaviour under different load conditions, explore the space of possible bridge designs, and find "surprises" in this process. We can see here reflection-in-action: "interacting with the model, getting surprising results, trying to make sense of the results, and then inventing new strategies of action on the basis of this new interpretation. Students could iterate very quickly with this design tool" (Schön, 1996). In Heidegger's words, Growltiger helps to design situations where one must deal with breakdowns; also, it gives the student a vast repertoire of situations, cases, and so forth, on the topic to be learned.
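The kind of rapid "what if" iteration described above can be sketched with the classic Euler-Bernoulli result for a simply supported beam under a central point load. This is a minimal illustration of the idea, not the actual Growltiger code (which is not available); the function name, material values, and cross-section are assumptions invented for the example.

```python
# Minimal sketch of the design-space exploration described above.
# Uses the textbook Euler-Bernoulli result for a simply supported
# beam with a central point load: deflection = P * L^3 / (48 * E * I).

def midspan_deflection(P, L, E, I):
    """Mid-span deflection (m): P load (N), L span (m),
    E Young's modulus (Pa), I second moment of area (m^4)."""
    return P * L**3 / (48 * E * I)

# Iterate the way a student might: same load and span, varying beam
# depth (assumed rectangular section, 0.1 m wide, steel).
E = 200e9  # Pa, a typical value for steel (assumed)
for depth in (0.2, 0.3, 0.4):        # metres
    I = 0.1 * depth**3 / 12          # I = b*h^3/12 for a rectangle
    d = midspan_deflection(P=10_000, L=5.0, E=E, I=I)
    print(f"depth {depth:.1f} m -> mid-span deflection {d * 1000:.3f} mm")
```

Each run is a quick, risk-free experiment: doubling the beam depth cuts the deflection by a factor of eight, the sort of surprising, non-linear result that invites the reflection-in-action Schön describes.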
Walking in the Fog

This is a case study on IT project implementation that I have used in different university settings, both face-to-face and online. The case, called NetActive City, tells the story of the implementation of a virtual school of entrepreneurs and a virtual incubator. The main point is that the case is given in several "parts" and the situation changes as time goes by (as happens in real life!). As a matter of fact, the question of time is a fundamental one but, unfortunately,
never properly handled in PCCE. Case studies usually give all the information at one time, but it never happens like that in real life. By doing this, traditional case studies train students to analyse facts of the past rather than to cope with present situations. By giving the case in several parts, one can create breakdowns, hence training students to respond to changes and to reframe the problem in the light of new information on the situation. In his book Testaments Betrayed: An Essay in Nine Parts, Milan Kundera brilliantly summarises this question of time and our ontological impossibility of knowing the future. In the chapter "Paths in the Fog" he says that man proceeds in the present always as one who walks in the fog: unsure of what the next moment may bring. Walking in the fog, one can see the edge of the path and what happens nearby, and react; one can see 50 meters ahead, but not beyond. This fundamental truth should be scenarised in our case studies: instead of training students in the illusion of rigorous planning based on data, one should train them to work with uncertainty and breakdowns. In other words, one must put fog in case studies. Online learning opens new possibilities to do this. Case studies can then take place in short "chapters" in which the professor gives new information on the situation, hence changing it and asking students: "What will you do now?" Organising an asynchronous discussion will allow for reflection-on-action, while running a discussion with a small group using synchronous discussion tools (e.g., a chat room) will force reflection-in-action. In both cases the instructor can design situations where one must deal with breakdowns.
A Blended Learning Course on Communication

Another example of a course aimed at learning from breakdowns is the one designed on communication for la Caixa employees.12 The course is structured in six learning units, every unit having the same structure:

• First, trainees read a story (a mini-case that tells a story with a breakdown, a surprise, that can be interpreted in terms of human communication) and participate in an online discussion of this story in a forum. The mini-cases create not only breakdowns but also emotional involvement, because the stories are real stories of what happens in the daily work at this savings bank.
• Second, trainees are encouraged to access some readings on communication theory that allow for a new interpretation of the story. Then follows an online discussion of participants' own examples of the same kind of story.
• Third, following a given procedure, trainees must run a face-to-face exercise on human communication (with a colleague, a friend, etc.), then report the results via e-mail, and finally participate in an online discussion on what happened in this exercise.
• Finally, trainees must write an evaluation report on the above exercises, in the light of what they have learned.

As we can see, this course is not a completely online course. Participants must do some face-to-face activities. This is because human communication is an embodied phenomenon. As human beings, we are not like minds in a vat; we have bodies, and our bodies play an important role in the communication process. Therefore, if one wants to learn to communicate (which is not the same as learning about communication), one must also train the body to communicate and reflect on what happens to the body in the face-to-face exercises. Moreover, we think that in human communication courses face-to-face exercises are the only way to allow people to reflect in action, the
subsequent online discussions allowing them to reflect on action. Applying Heidegger's ideas, one can see that in this course the learner is thrown into the situations the learner must know, or which form the context for the practice that must be mastered. Every learning unit starts with local and concrete examples (those of the bank) and then moves gradually to detached reflection; moreover, the situations are designed so that learners must deal with breakdowns.

At level "Expert," if one is learning a motor skill, almost nothing can be done with ICT to support or enhance learning. To reach this level, one must train the body to respond skilfully to different situations, and this can only be done by practising the skill; more precisely, by putting the body to practise the skill. In the case of a cognitive skill, the best way to become an expert is still to practise the skill again and again, under the guidance of a master, and to acquire a vast repertoire of cases. ICT can help with the latter by giving access to a lot of material and allowing participation in virtual communities of practice and/or in virtual learning sets. However, the density of interaction in learning sets (learner/learner and learner/instructor interaction) is usually very high; it is therefore difficult to have good discussions at a distance, and better to run them face-to-face, in a classroom or in a workroom. Moreover, in order to imitate a master's actions, one needs to work side by side with the master: in order to experience how to respond directly to the risky and perceptually rich situations that the world presents, in order to capture the expert's style, in order to learn abilities for which there are no rules, and so forth, we must experience with our whole bodies, with the five senses, and not only those that can be easily mediatised by ICT (e.g., sight and hearing).
In other words, I completely agree with Hubert Dreyfus when he says that, at the level of expertise, distance apprenticeship is an oxymoron (Dreyfus, 2001).
A Heideggerian View on E-Learning
Future Research Directions

As I have said earlier, if one wants to learn a motor skill at level "Competent," one must enter a face-to-face apprenticeship. However, from a theoretical point of view, it could be possible to reach competence in a motor skill without having to interact face-to-face with other human beings. Chilean biologists Humberto Maturana and Francisco Varela discovered that our nervous system is a closed system. As a consequence, our nervous system is unable to distinguish between two identical stimuli coming from outside (Maturana & Varela, 1984). More precisely, the nervous system will react identically whether our senses are stimulated by another human being (e.g., another fighter or dancer) or by a virtual reality system.13 Therefore, one can imagine learning to dance with a virtual reality system. Even if nowadays this is not the most cost-effective way to learn how to dance (to say the least!), virtual reality seems a promising field of research for designing ICT-based learning systems that reach competence in a motor skill.

Just as virtual reality could be a future trend for reaching competence when learning a motor skill, videogames can open new possibilities when learning a cognitive skill at this level. Every father who has carefully observed his son playing a videogame, whether on a computer or on other devices like a mobile phone, a Gameboy, or a PlayStation, will have noticed how quickly he decides and takes action. If one is not extremely good at pattern recognition, sense-making in confusing environments, and multitasking, one will not succeed in the game world. In this world, one is immersed in a complex, information-rich, dynamic realm where one must sense, infer, decide, and act quickly, always responding to new situations (Brown, 2005). In other words, one must be good at reflection-in-action, hence becoming a master in dealing with breakdowns.
Moreover, thanks to the Internet, a new generation of videogames allows for the learning of
social skills. Massive Multiplayer Online Games (MMOGs), like World of Warcraft, involve hundreds of thousands of kids linked up (Thomas & Brown, 2006). I strongly believe that game-based learning, and in particular specially designed MMOGs, could be an important trend in e-learning innovation.
References

Brown, J.S. (2005). New learning environments for the 21st century. Paper presented at the Forum for the Future of Higher Education's 2005 Aspen Symposium.

Brown, J.S., & Duguid, P. (1991). Organizational learning and communities of practice: Toward a unified view of working, learning and innovation. Organization Science, 2(1).

Brown, J.S., & Duguid, P. (1996). Stolen knowledge. In H. McLellan (Ed.), Situated learning perspectives (pp. 47-56). Englewood Cliffs, NJ: Educational Technology Publications.

Brown, J.S., & Duguid, P. (2002). The social life of information. Boston, MA: Harvard Business School Press.

Dreyfus, H.L. (1986). Mind over machine. New York, NY: Free Press.

Dreyfus, H.L. (1991). Being-in-the-world. Cambridge, MA: MIT Press.

Dreyfus, H.L. (1992). What computers still can't do: A critique of artificial reason. Cambridge, MA: MIT Press.

Dreyfus, H.L. (2001). On the Internet. London: Routledge.

Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge, UK: Cambridge University Press.

Maturana, H., & Varela, F. (1984). El árbol del conocimiento [The tree of knowledge]. Santiago de Chile: Editorial Universitaria.

Mintzberg, H. (1988). Formons des managers, non des MBA! [Let's train managers, not MBAs!]. Harvard-L'Expansion, 51, 84-92.

Mintzberg, H. (1996). Musings on management. Harvard Business Review, 74(4), 61-67.

Mintzberg, H. (2004). Managers, not MBAs. San Francisco, CA: Berrett-Koehler.

Papert, S. (1993). Mindstorms: Children, computers, and powerful ideas (2nd ed.). New York: Basic Books.

Pedler, M. (1991). Action learning in practice. London: Gower.

Pfeffer, J., & Sutton, R. (2000). The knowing-doing gap. Boston, MA: Harvard Business School Press.

Revans, R. (1980). Action learning: New techniques for management. London: Blond & Briggs.

Schön, D.A. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books.

Schön, D.A. (1987). Educating the reflective practitioner. San Francisco: Jossey-Bass.

Schön, D.A. (1996). Reflective conversation with materials. In T. Winograd et al. (Eds.), Bringing design to software (pp. 171-184). Reading, MA: Addison-Wesley.

Schön, D.A. (1997). À la recherche d'une nouvelle épistémologie de la pratique et de ce qu'elle implique pour l'éducation des adultes [In search of a new epistemology of practice and its implications for adult education]. In J.M. Barbier (Ed.), Savoirs théoriques et savoirs d'action (pp. 201-222). Paris: Presses Universitaires de France.

Spinosa, C., Flores, F., & Dreyfus, H.L. (1997). Disclosing new worlds. Cambridge, MA: MIT Press.

Thomas, D., & Brown, J.S. (2006). The play of imagination: Extending the literary mind (Working Paper). Retrieved October 17, 2007, from http://www.johnseelybrown.com

Vasquez Bronfman, S. (2005, September). A Heideggerian perspective on reflective practice and its consequences for learning design. Paper presented at the 11th Cambridge International Conference on Open and Distance Learning, Cambridge, UK.

Wenger, E. (1998). Communities of practice: Learning, meaning and identity. Cambridge, UK: Cambridge University Press.

Wenger, E., McDermott, R., & Snyder, W.M. (2002). Cultivating communities of practice. Boston, MA: Harvard Business School Press.

Wikipedia. (2006). Martin Heidegger. Retrieved October 17, 2007, from http://en.wikipedia.org/wiki/Heidegger

Winograd, T., & Flores, F. (1986). Understanding computers and cognition: A new foundation for design. Norwood, NJ: Ablex.

Wrathall, M., & Malpas, J. (2000). Heidegger, coping, and cognitive science: Essays in honor of Hubert L. Dreyfus (Vol. 2). Cambridge, MA: MIT Press.
Additional Reading

Brown, D., Richards, M., & Barker, J. (2006). Massively multi-player online gaming: Lessons learned from an MMOG short course for high school students. In T. Reeves & S. Yamashita (Eds.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2006 (pp. 404-406). Chesapeake, VA: AACE.

Burbules, N.C. (2006). Rethinking the virtual. E-Learning, 1(2), 162-183.
Connolly, T.M., & Stansfield, M. (2006, June 25-28). Using interactive technologies in teaching an online information systems course. In Proceedings of the 2006 Informing Science and IT Education Joint Conference, Salford, UK.

De Freitas, S. (2006). Learning in immersive worlds: A review of game-based learning. Paper prepared for the JISC e-Learning Programme. Retrieved October 17, 2007, from http://www.jisc.ac.uk/media/documents/programmes/elearning_innovation/gaming%20report_v3.3.pdf

DeWolfe Waddill, D. (2007). Action e-learning: An exploratory case study examining the impact of action learning on the design of management-level Web-based instruction. In M.K. McCuddy et al. (Eds.), The challenges of educating people in a challenging world (pp. 475-497). Springer.

Dreyfus, H.L. (2002). Intelligence without representation: Merleau-Ponty's critique of mental representations. Phenomenology and the Cognitive Sciences, 1(4).

Dreyfus, H.L., & Dreyfus, S.E. (1985). From Socrates to expert systems: The limits and dangers of calculative rationality. In C. Mitcham & A. Huning (Eds.), Philosophy and technology II: Information technology and computers in theory and practice. Reidel.

Dreyfus, H.L., & Dreyfus, S.E. (1999). Apprenticeship and expert learning. In K. Nielsen & S. Kvale (Eds.), Apprenticeship: Learning from social practices. Denmark: Hans Reitzels Forlag.

Ducheneaut, N., & Moore, R.J. (2005). More than just "XP": Learning social skills in massively multiplayer online games. Interactive Technology & Smart Education, 2, 89-100.

Duesund, L. (2000). Teaching and learning: An interview with Hubert Dreyfus. Pedagogiske utfordringer, 2. The Norwegian University of Sport and Physical Education. Retrieved October 17, 2007, from http://www.nih.no/kunnskap_om_idrett/index.html
Foreman, J. (2004, September/October). Game-based learning: How to delight and instruct in the 21st century. Educause Review, 51-66.

Galarneau, L. (2005, June 16-20). Spontaneous communities of learning: A social analysis of learning ecosystems in massively multiplayer online gaming (MMOG) environments. Paper presented at the International DiGRA Conference, Vancouver, British Columbia, Canada. Retrieved October 17, 2007, from http://www.gamesconference.org/digra2005/overview.php

Gibbs, P., & Angelides, P. (2004, September). Accreditation of knowledge as being-in-the-world. Journal of Education and Work, 17(3).

Graves, M. (1998). Learning in context (Working Paper). Retrieved October 17, 2007, from http://www.apple.com/education/LTReview/winter98/context.html

Kreisler, H. (2005). Meaning, relevance, and the limits of technology: Conversation with Hubert L. Dreyfus. Retrieved October 17, 2007, from http://globetrotter.berkeley.edu/people5/Dreyfus/dreyfus-con1.html

Nardi, B.A., Ly, S., & Harris, J. (2007). Learning conversations in World of Warcraft. In Proceedings of the 40th Hawaii International Conference on System Sciences, Hawaii.

Squire, K. (2005). Game-based learning: State of the field. Masie Center e-Learning Consortium. Retrieved October 17, 2007, from http://www.masie.com/xlearn/Game-Based_Learning.pdf

Steinkuehler, C.A. (2004). Learning in massively multiplayer online games. In Proceedings of the 6th International Conference on Learning Sciences, Santa Monica, California (pp. 521-528).

Van Manen, M. (1995). On the epistemology of reflective practice. Teachers and Teaching: Theory and Practice, 1(1), 33-50.
Wierinck, E. et al. (2005). Effect of augmented visual feedback from a virtual reality simulation system on manual dexterity training. European Journal of Dental Education, 9(1).
Yoo, Y.-H., & Bruns, W. (2005). Motor skill learning with force feedback in mixed reality. In Proceedings of the 9th IFAC Symposium on Analysis, Design and Evaluation of Human-Machine Systems, Atlanta, GA.
Endnotes

1. Professional education refers to university education (either undergraduate or postgraduate) of architects, engineers, doctors in medicine, business professions, and so forth (see Schön, 1983).
2. "Continuous and corporate education" refers to all educational activities (either performed in-company or not) that do not lead to a degree.
3. To be rigorous, "information" rather than "knowledge" should be written here (see Brown & Duguid, 2002).
4. By e-learning, we mean here not only ICT-based distance education, but more generally every use of ICT to support or enhance learning.
5. As human beings, we are always thrown into a given situation. A child is born in a given hospital, city, country, and will live with a given family. When we are at work, we are thrown into meetings, conversations with customers, computer programming, architectural design, and so forth.
6. For instance, in order to innovate one does not need to have a theory of innovation (Spinosa et al., 1997).
7. John Seely Brown and Paul Duguid (2002) make the distinction between learn about and learn to be, to which I add learn to do.
8. For instance, when using e-learning technology, one does not think about it. Instead, when technology is "transparent," one is completely concentrated on what one is doing with the technology (our relationship with equipment is always purposeful).
9. The IT Department is spread across three different buildings.
10. Hubert Dreyfus calls skillful coping not only the way people deal with daily work situations, but mainly the smooth and unobtrusive responsiveness to those situations (Wrathall & Malpas, 2000).
11. "For the case study method to work, the students must become emotionally involved. So, in a business school case study, the student should not be confronted with objective descriptions, but rather be led to identify with the situation of the senior manager and experience his agonized choices and subsequent joys and disappointments" (Dreyfus, 2001). Breakdowns also have another benefit: they put people in the right mood for learning because they reveal what people are not able to do.
12. In fact, this course on communication was designed by a company whose members were trained in the applications of Hubert Dreyfus' ideas, among others.
13. See for instance the Wikipedia article on Virtual Reality (http://en.wikipedia.org/wiki/Virtual_reality).
Chapter III
Philosophical and Epistemological Basis for Building a Quality Online Training Methodology Antonio Miguel Seoane Pardo Universidad de Salamanca, Spain Francisco José García Peñalvo Universidad de Salamanca, Spain
Abstract

This chapter outlines the problem of laying the groundwork for building a suitable online training methodology. In the first place, it points out that most e-learning initiatives are developed without a defined method or an appropriate strategy. It then critically analyzes the role of the constructivist model in relation to this problem, affirming that this explanatory framework is not a method and describing the problems to which this confusion gives rise. Finally, it proposes a theoretical and epistemological framework of reference for building this methodology based on Greek paideía. The authors propose that the search for a reference model such as the one developed in ancient Greece will allow us to develop a method based on the importance of a teaching profile "different" from traditional academic roles and which we call "tutor." It has many similarities to the figures in charge of monitoring learning both in Homeric epic and Classical Greece.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Introduction: The Failure of E-Learning Without a Method

Online training, or e-learning, is an authentic revolution in the way we conceive learning experiences compared to how we thought of them until very recently. It would take too long to list all the changes that have taken place in this new educational modality, which have affected technological elements, communication dynamics, social factors, and new teaching and learning roles, as well as the teaching-learning relationship itself, the value of the contents, and the methodology of the process. However, despite the euphoria unleashed by online training in recent years, and the fact that the development of tools, training systems, and digital contents has been and still is extraordinary, we cannot hide from the fact that there is a certain skepticism, or even disappointment, when the level of user satisfaction and the outcomes attained in online training are analyzed, if we limit ourselves exclusively to the learning objectives actually attained. What is important in any educational intervention, whatever its type (electronic, at a distance, or face-to-face), is none other than achieving certain learning objectives: the proof of having taught them does not suffice; we need to be sure that they have actually been acquired. Since e-learning is a type of learning characterized by technological mediation (this is not its only peculiarity, but for the time being we will focus on this aspect), and since what is apparently different with respect to other kinds of training seems to lie in the elements of this mediation, when we analyze the causes of this skepticism (or failure) we usually focus exclusively on the technological factors: the learning environments are not appropriate, the digital contents are not well-structured, and so forth. Consequently, an enormous amount of material and human resources are devoted to perfecting these elements in the hope of improving the learning experience, without our realizing that the solution to the problem lies in another direction. Logically, the evolution of these technological mediation factors will contribute to improving the context, just as we would improve the learning experience if we renewed the blackboards, the lighting, or the equipment of a classroom in a traditional context. However, we all know that this is not the main thing for achieving quality training. And looking back on our own experience, we all remember that we learned nothing, or very little, from the technical or logistic elements in our schools, but we did learn a lot from our good teachers and classmates. Thus, technology must be improved, but we cannot fall into the trap of only blaming the tool for not being able to reach the desired objectives. Technological mediation in e-learning is precisely that, a medium, and in any case it is a mistake of training strategy not to have had suitable resources, or not to have been capable of adapting ourselves to the means available. The tool is, or we should make it be, as neutral as possible.

All in all, if we study the brief history of e-learning we can already speak of "generations" that have marked its development up until now, and whose evolution allows us to predict (assuming that this is possible) where we are going in the future (Seoane, García, Bosom, Fernández, & Hernández, 2007). After a first generation marked almost exclusively by the development of technological environments and digital contents, we have moved in recent years towards a concern for the e-learning "model" and, consequently, towards a concern for the development of implementation strategies and the interoperability of online training environments with an institutional model for the university, the public administration, and the business firm. Thus the question of a model of efficiency and quality appears.
However, we are witnessing a moment in which a strange paradox is occurring: the greater the technological mediation, and the more we implement our systems and improve the environments and training contents with a view to reducing the intervention of the teaching roles, the worse the learning outcomes. It is becoming necessary to move on, then, towards what we call the "second-advanced generation," in which the human factor in online training plays a crucial role, not only from the point of view of planning and strategic design, but also, and especially, as an element present in all the stages of the training itinerary. The redefining and centrality of teaching roles in e-learning is the main characteristic of this generational stage, at which many institutions and training initiatives really concerned about quality currently find themselves.

Thus, the cornerstone that explains the disappointment in e-learning up until now is the human factor. The great fallacy of technological mediation has consisted of the belief that the mediating role of the classroom teacher would be replaced by technology, when the latter should really be at the service of the teacher, who will carry on playing the main mediating role in learning. This mistake, as widespread as it is serious, is the consequence of the transmutation of a training paradigm into one of an "informative" nature. In other words, we can say that underlying this matter there is an enormous confusion between information and education (or training). This situation is not at all new and has come up when analyzing the problems of other "classical" training paradigms, but with online training it has been taken to its ultimate consequences, most likely because of the emergence of the so-called "information society" and information and communication technologies. Their names are accurate enough, but they seem to have subliminally taken on educational aspirations.
Indeed, a book, a newspaper, the Internet, or audiovisual material can provide us with information, but never education or training. Education is a specifically human activity that consists, among other things,
of the internalization and assumption of specific information with a significant purpose. Thus, as can be seen, education presupposes information, but it is more than that. That is why educational material alone cannot "educate." This can only be done by the subject who becomes educated by internalizing, by becoming aware of the value of the contents, by building a meaningful universe within him or herself, or, what is more common, by the mediation of other human beings, who, either individually (with a teacher) or collectively (with a group of students or in the social context itself), contribute to turning information into an educational experience in the mind of the individual. This dichotomy can be compared to what in philosophical terms Aristotle (and later the Aristotelians, especially Thomas Aquinas) called "active intellect" and "passive intellect" (Aristotle, De anima, 430a 10-25; Thomas Aquinas, Summa Theologica, first part, question 79), or to the cognitive distinction between memory and consciousness. Thus, education is more than information. And if we wish to attain it, we have to go beyond technological mediation and learning objects to speak of human interaction, both among students and with teachers, because this is where the success or failure of most educational or training initiatives begins. Hence, it seems that two major questions still remain to be solved (perhaps because they have not been sufficiently well-defined) before we face the main problem: on the one hand, it is necessary to define a suitable interaction model for online learning, taking advantage of the fact that the tools available make possible new modalities of communication that were impossible until now (Seoane et al., in press); on the other hand, not only do we have an unsuitable definition of the teaching attributions and profiles in online training, but they are also being drastically reduced or eliminated.
They often end up becoming mere dynamizers and stimulators of learning, as if they were the “cheerleaders” of training. Absurd, right? But absolutely true in many cases.
But the main problem which our e-learning initiatives often face is the total lack of a suitable method for their development. When we speak of the "second generation" of e-learning, we are referring to a strategic approach to the training model required by the entity implementing it. This strategy determines the "what" and the "for what," but only the development of an appropriate methodology will make it possible to establish "how" the pre-established objectives will be achieved. But is it really true that no methodology for online training has been designed after all these years? An analysis of a good part of the training initiatives, and even of the specialized literature, certainly seems to show this. On the one hand, if we focus on a purely technocentric model, in which the addressee gains access to knowledge and "interacts" with it without any other mediation, there is no method with educational ends; the most we can affirm is whether or not there has been a good sequencing and organization of the information, and whether or not the student has been able to respond suitably to some test items that prove that this information has been acquired, but not whether real training has taken place. Thus, no matter how much we theorize over these aspects, we will not be going in the right direction in our quest for a training method. Moreover, if we look at other initiatives based on predominantly vertical human interaction (student-teacher-student), we find that there is no substantial change with respect to certain face-to-face contexts, which leads us to the same problems as in face-to-face teaching without, on the other hand, being able to make good use of the advantages of a completely different interaction and communication model. It would in any case be a model similar to that of tutoring in traditional distance education, which differs considerably from the paradigm we are seeking for e-learning.
Finally, if we analyze initiatives and studies on learning communities, a key concept for defining the educational model of many e-learning interventions and about which pages and pages have been written, we discover that these communities favor a high degree of interaction and communication, but we cannot avoid a certain feeling of anarchy and loss of time in most of these collective groups. To use Kantian terms, there are many theses and antitheses, but few syntheses; above all, there is still great difficulty in determining who has attained certain training objectives and to what degree. Furthermore, we lack a certain criterion of authority (in the Latin sense of the term auctoritas), which makes it difficult to select the best syntheses of the common task, because there is a belief (generally naïve) that in these communities a final synthesis of knowledge is produced per se, when what usually happens is that, when this does occur, each member contributes his or her view of the problem, but neither a conclusion nor a consensus is reached on it. This is so because although e-learning environments "transform the social interaction space, … a deeper understanding of the 'inside' of the collaborative learning processes is still missing" (Cecez-Kecmanovic & Webb, 2000). Of course, learning communities, especially when made up of qualified adult individuals, are instruments with high educational capacity, thanks to the possibilities of interaction and communication and their potential for favoring contexts of critical and active construction of knowledge. However, the problem of learning communities, at least in the shape they have taken in a good part of prior experience, lies in their excessively "democratic" approach. Favoring a cognitive and social presence in these communities is not enough. In order to be able to design, direct, and nurture interaction in a learning community, a strong teaching presence is necessary.
This does not have to affect the open and critical nature of these communities; what is more, the key factor for success in these communities will lie in the teacher’s ability (as in face-to-face teaching) to create a suitable climate that will favor the setting
Philosophical and Epistemological Basis for Building a Quality Online Training Methodology
up of a genuine learning community, one that is perfectly monitored and well-constructed (Garrison & Anderson, 2003, 2005). Thus, we have contexts, we have interaction models and, of course, technology, but we lack methods for the development of quality training initiatives. A method is nothing more than a guide or instructions as to the road to follow to reach certain objectives. In this case, the method has to be understood in a three-fold sense: first, as the set of instructions and strategies offered to the teacher in order to achieve the learning objectives; second, analogous rules must exist for the correct acquisition of the contents on the part of the student (who should also have a method); finally, since e-learning favors social knowledge building and social learning is by far the most significant of all those that exist, a method is needed to regulate social interaction with an educational purpose, especially when we are in a “non-natural” context such as that of virtual learning environments.
CONSTRUCTIVISM AS A GOAL, BUT NOT AS A METHOD

One of the terms most used in relation to e-learning (to the point that its original meaning has become completely lost and it is now used gratuitously) is “constructivism,” as a synonym of prestige, careful methodology, and good practice. This expression can be found in essays on methodological approaches or theories for online training, in the explanation of the instructional design of an initiative, in the conception of a learning object or even (surprisingly) to advertise the virtues of a software tool addressed to online training. The problem is that constructivism is not a method, nor even a theory, but rather an explanatory framework (Coll et al., 2005) which tells us that de facto learning occurs in a social, collective context and is the fruit of construction beyond the solitary consciousness of the individual. Actually,
the ideas of Vygotsky (Vygotsky & Cole, 1978), those of Bruner (Bruner, 1997, 1998) and even those of Dewey (1933, 1938) form part of an ideological and philosophical context developed during the 20th Century in opposition to the methodological individualism and transcendental philosophies of consciousness that were developed up to the 19th Century and which had their last great exponent in Hegelian idealism. Philosophical approaches in accordance with this presuppose a new type of rationality that replaces an idealist paradigm with another of dialogical, communicative, and social rationality which we can find in key thinkers of the last century such as Gadamer, Apel, and Ortega y Gasset. Thus, constructivism explains, according to the ideological presuppositions of its time, how knowledge is constructed in the human mind. This does not presuppose the existence of an implicit method, or that this explanatory framework can provide us with this method by itself. In simple terms, thanks to cognitivist and constructivist thinkers, we know that the cognitive process takes place in a certain way, which does not mean that they have told us how to get our students to acquire the competencies and skills we program in a learning initiative. This is the difference between an explanation and a method: knowing what has happened (and even knowing why) and knowing how to make it happen again, adapting it to predetermined learning circumstances. Therefore we may ask: What does constructivism offer us? What is it good for? The thesis here defended postulates that constructivism can be considered as a goal for learning, even as a “table of validation” thanks to which we will be able to verify the solidity of the knowledge acquired by our addressees. At most, it could be a guide or perspective for preparing a training methodology, but in no case must we confuse the end with the means that we intend to use for reaching our objectives. 
Constructivism is thus not valid as a method, and the need to develop a methodology for online training remains pending.
However, in the name of constructivism, many practices in e-learning have become widespread, practices which, based on the supposed virtues of this training paradigm, entail more than a few difficulties and are to a certain extent responsible for the high failure rate of online training initiatives. We now take a look at some of them.
The Excuse of a Student-Focused Model

Together with expressions such as “constructivist methodology” we often find a reference to the “student-focused model.” In many cases this statement is correct, but students end up discovering that it means something completely different from what they expected. In general, placing students at the center of learning is usually an excuse to unload the whole weight of learning on them and propose a self-learning itinerary with as little assistance as possible. Indeed, if students are the protagonists, they are solely responsible for carrying out the learning task. This is the meaning of “occupying the center” in many e-learning initiatives. A model in which the student occupies the center of the training scenario, far from being a privilege and a stimulus, in many cases ends up being a drawback and gives rise to results contrary to those desired. To show this graphically, the central position of students means that all the elements revolve around them and none of these elements are a point of reference, but rather they all have the student as a reference. This image, which may seem somewhat strange, is disconcerting for many students who are not used to an autonomous style of learning, to setting their own rhythm of learning, and to adapting to the peculiarities of the environment, because the environment never adapts to them. It is true that this training model adapts perfectly to the peculiarities of self-taught persons with a great ability to turn information into training by themselves. However, most individuals need a figure to act as guide and help them change
the information into training thanks to his or her mediation. In many cases, this mediation occurs “among peers” (how many of us have learned, thanks to our classmates what our teachers had not been able to make us understand?) but we must not renounce a teaching figure who, suitably adapted to the context, can perform this mediation. The students, therefore, do not have to be the center of learning but the goal of this task, since they are the addressees of the training intervention. In any case, the oft-mentioned “center” should be occupied by that element of human mediation that here we call “tutor” and who adapts the training initiative (with all its technological, academic, didactic, and human components) to the peculiarities of each addressee, takes charge of guaranteeing the actual acquisition of the competencies and skills foreseen for the training initiative and is ultimately responsible (often even more so than the student) for attaining the training objectives.
The Existence of a Community Is Not Enough for Social Learning to Occur

Another of the presumed virtues of many online training initiatives with a constructivist approach is the guarantee of training success based on community working dynamics. Gathering together in one room a hundred splendid musicians will not make this assembly an orchestra, just as a set of sailors enlisted on the same ship cannot be considered a crew. For there to be a real community (musical, nautical, sport, or learning) we need much more than a set of related individuals in the same space-time or “virtual” context. Indeed, as Gestalt psychologists affirmed, inspired by the old discussion that Aristotle initiated in his Metaphysics (1028a-1041b), the whole is more than the sum of its parts. No one will be surprised if we say that the social context is one of the most efficient and common forms of learning, as is shown in the way we acquire knowledge of our native language—without
the need to enroll in any educational institution and attain a notable mastery of it. However, when we make a set of persons in a training initiative interact, we have no guarantee that they will form a genuine learning community. Communities of students are artificial societies, and making the “sum of the parts” into a “whole” is frankly something very complex. Thus, obliging students to work in a group does not presuppose that they are going to form a learning community. This is a problem well-known to tutors and experts in virtual group dynamics who, using the same strategies in seemingly analogous groups, often attain completely different learning outcomes, both individually and collectively. Turning a group of students into a learning community is a real art, as is turning a hundred musicians into an orchestra. The former may even be more difficult than the latter, but this is coming from someone who has never directed an orchestra. The dynamics that are set up in a learning community are complex and require detailed study. There are magnificent works on learning communities (Wenger, 1998a, 1998b) but there is no method capable of guaranteeing that we will be able to reproduce or build an efficient community. Nevertheless, we can affirm that opening up debate and promoting teamwork is not enough to constitute a learning community and to “construct” a social learning context. A group must have good leadership and be solidly structured so that guidelines for behavior can be developed that in the end will turn this sum of the parts into a whole that functions as an authentic community. In other words, the possibilities for success in the building of learning communities online (or face-to-face; there are no significant differences in this respect) increase when we start from a situation that includes teaching roles that regulate communication flows, establish guidelines and rhythms for learning, and foster the active participation of the members. 
The construction of learning in a community is a task that is shared not only by each and every one of
the students involved, but also includes the tutor or tutors at the head of said community. It is a matter of achieving a dynamic or model that some scholars call socioconstructivist, in which the result of social construction is not the responsibility of the students or the teachers (the model is not focused on the student or the teacher), but rather is the outcome of interaction between learning contents, teaching staff, and students (Barberà, 2006) by means of a design for activities that foster the acquisition of competencies and skills and that have an eminently practical approach that favors this interaction.
Tools Do Not Construct

The third of the usual practices that can be observed in many initiatives inspired by constructivism is the use of technological tools and methods that are posited as constructivist per se. It is well known that constructivism, and especially social constructionism, is the theoretical reference model for many developers of software for online learning, especially open source software. Possibly the best-known system of this type for course management, Moodle (http://moodle.org), confesses on its main page that its philosophy is “social constructionist pedagogy” based on four underlying concepts: constructivism, constructionism, social constructivism, and connected and separate (Moodle, 2007). The creator of this instrument, Martin Dougiamas, has said that his reference model when designing Moodle was the analysis of learning communities based on constructivism and social constructionism (Dougiamas & Taylor, 2003). However, the use of Moodle or any other e-learning tool does not guarantee social construction, nor does it foster the achievement of certain objectives. The intentionality of the person who constructs a tool has nothing to do with the use that users may make of it and the corresponding outcomes. Was Alfred Nobel responsible for the belligerent use of dynamite, a compound originally intended to prevent the constant accidents in mines owing to the instability of nitroglycerine? Likewise, the tools that we use may be more or less suited to the aims and training strategies of our activities, but in themselves they give no guarantee whatsoever of constructivist learning. What is more, it could be said that the type of tool we use is practically irrelevant (as long as it fulfills certain minimum conditions) compared to the importance of a good instructional design, a correct training strategy, and a good human team to head the teaching-learning process. Let us then assume that a quality online training initiative has to have as its goal that the students should achieve significant, active learning, constructed within a social context, whenever possible in the midst of a learning community. However, in order to achieve this objective, we have to avoid three major obstacles which, like a tree in front of us, can prevent us from seeing the forest. On the one hand, the affirmation of a student-focused model does not at all guarantee a construction (much less a social construction) of knowledge; on the contrary, it can even hinder it. On the other hand, we often observe a confusion between group work and a learning community, or between group and community. Finally, we have been able to show that the use of certain tools does not in itself condition social knowledge building, because this depends on the modalities of interaction that occur in the dynamics of training activities; thus, they have to do with humans, not with machines. In short, learning (in e-learning or in conventional environments) is the product of social interaction, which as such has rules, roles, and defined structures. To extract all its potentialities, it must be correctly moderated and led by someone with a professionally well-defined teaching profile, who plays a particularly important role in online training and on whom the success of our initiative largely depends.
The methodology of our online training initiatives must therefore revolve around the central and catalyzing figure of the tutor.
IN SEARCH OF A GROUNDWORK FOR THE METHOD

Contributions from Greek Paideía

Taking into account the starting supposition of these pages, to wit, the importance of monitoring learning through a specialized professional profile, to which the major share of the training methodology will fall, it is evident that we are not looking at the traditional teaching figure, at least as understood in our current school systems. It is thus a matter of a professional whose main mission is not to transmit knowledge but rather to guarantee that it reaches the addressees, in an active, participative, and significant context. In our opinion, a large part of the success or failure of online training initiatives will depend on whether or not we have this type of professional, suitably inserted in a solid and well-constructed context of training planning. The big question now is as follows: Has there ever been in the history of education a professional profile with such characteristics? Do we have any model that can serve as a reference, and from which we can develop the role that corresponds to our quality teacher in e-learning initiatives? Our answer is clearly affirmative. Indeed, in Ancient Greece we can find “teaching” models whose characteristics, despite forming part of a context very different from today’s, and not even a homogeneous one, are extraordinarily interesting for the task at hand, which is none other than designing a suitable teaching profile for online training methodology. Briefly, and by way of example, below we give the “professional” profile of these personages that will serve as inspiration for the construction of our online teacher and his or her methodology.
The Mentor as Teacher of the Homeric Hero

One of the first testimonies of the teacher-disciple relationship and thus of interaction between teaching profiles and pupils in a learning context that we know of in western culture has its source in Greek mythology and the Homeric epic. The epic heroes acquire their greatness both from their ancestry and, what is even more interesting, from the presence and importance of their teachers, who not only educate them and prepare them to face the hazards of the heroic life, but even carry out a constant follow-up of their disciple’s actions, intervening when they are most needed. Achilles was taught by no less than the centaur Cheiron and by his mother the goddess Thetis; the latter intervened even at the moment when the hero doubted whether he should or should not go to the Trojan War, helping him to weigh his decision. Indeed, Achilles had to choose between two ways of living and dying. On the one hand, if he decided to stay he would marry, have children and grow old as a king and after his death his descendants would remember him. On the other hand, if he decided to go to war, he would die young without descendants, but the whole of humanity would admire his deeds forever. Everyone knows the result of his choice. Odysseus, for his part, received the permanent guidance of the goddess Athena, who appeared to him on several occasions to advise him, such as on his return to Ithaca, when she changed him into a beggar and proposed a plan to put an end to the suitors that were harassing Penelope and ruining his property. In these cases we encounter figures that appear in the life of the Homeric heroes, who are undoubtedly of greater rank and importance than their disciples, but who do not outshine the actions of their pupils. Rather, the opposite: they extol them by placing them in circumstances in which they will be able to come out with flying colors, magnified by their bravery and preparation.
There is no room for doubt that the prominence goes not to the teacher but to the disciple, but neither can it be denied that the constant presence of the teacher, the security it gives the hero to know that someone is watching over him and appears when most needed, even placing him before complex situations from which he must extract new teachings, is a model of training and permanent tutoring that is characteristic of the Greek paideía. However, the most characteristic example, from which we can extract greater conclusions, is that of the relationship between Mentor and Telemachus in the Odyssey. According to Homer’s Odyssey, when Odysseus left Ithaca and was away fighting in the Trojan War, his son Telemachus was just an infant. So Odysseus entrusted Mentor with the care of Telemachus and the entire royal household until he came back 20 years later. Although Mentor is not a main character in Homer’s epic poem, he represents wisdom, trust, counsel, teaching, protection, challenge, encouragement, and so on (Anderson & Shannon, 1995; Carruthers, 1993). Mentor’s authority was so important to Telemachus that even the goddess Athena took the figure of Mentor to persuade the hero’s son to search for his father. The role of Mentor in instructing Telemachus is not quite clear in Homer’s poem, and this is one of the most interesting questions about the matter. It does not matter whether Mentor (or Athena) is the real “teacher” of Odysseus’ son. It is striking that Mentor is mentioned only a few times in the Odyssey and we do not know how he “really” instructed Telemachus. The only important thing is that Telemachus achieved enough maturity to know how to face Penelope’s suitors and help his father to complete the final revenge: he became a man with the help of an old person whose mission was to remain in the dark, “tutoring” Telemachus’ steps, not helping him but following his tracks at a certain distance, because no one can drive the fate of a man except himself. In fact, the undefined, secondary but crucial role of Mentor has not changed so much with regard to the excellent “Mentors” of e-learning students nowadays. Etymologically, “mentor” produced “monitor” in Latin. The verb moneo (to remind, to advise) comes from the Indo-European root *men- (to think, to know). So Homer’s character Mentor is an anthropomorphization of this idea: wisdom (Little, 1990), thought, knowledge (and consequently know-how), personified by an old man whose purpose is to transmit these skills. In the figure of old Mentor we find, then, an excellent personification of the role that the online training teaching profile should play. From a supporting role, yielding prominence to the disciple, he nevertheless invites the latter to act, to solve problems, and to learn through action. Learning, according to the principles of Greek paideía, was not based on acquiring theoretical knowledge or specific practical skills, but had to be oriented towards achieving areté, that which the Romans subsequently translated as virtus and which, erroneously, through Christianity, reached the West as “virtue.” In Homeric times, areté was related to the values peculiar to heroes, to noble warriors, and was a mixture of moral and martial ideals. Later, in the classical age, paideía transformed the meaning of areté, which now acquired a more humanist and political approach. Then, “excellence” (a more correct translation of the Greek term than “virtue”) consisted of the acquisition of all the values that make a man a citizen, a being capable of moving with ease in the polis and actively participating in the life of the city. Thus, for the Greeks, education (understood as an activity oriented towards practice and citizenship, and not as a simple learning of contents) is the key to the evolution of a civilization, and, linked from its origins to the heroic epic until its splendor in Athenian democracy, it appears as the motor behind Greek culture. 
Such was the importance of education (of this type of education) in Ancient Greece (Jaeger, 1945).
The Education of Man as a Citizen: The Sophists and Socrates

Towards the second half of the 5th Century B.C., and especially in its last quarter, a real revolution occurred in the way education was conceived of in Greece, to be precise, in Athens. The economic, social, and political changes that occurred in the city favored the appearance of new social needs and a fairly widespread demand for education far above what had until then been received in the family sphere, which only reached a certain level in the higher social strata. This growing demand favored the arrival in Athens of the Sophists, who unleashed a whole revolution in the way of conceiving education and, of course, aroused great controversy which, even now, has still not been analyzed with sufficient neutrality. Here it is not our intention to study what the arrival of the Sophists in Athens meant for education. There are several essays (in general fairly critical of the work of these thinkers) which can be referred to for a more detailed analysis, ranging from the more generic ones by William K.C. Guthrie (Guthrie, 1971), Mario Untersteiner (Untersteiner, 1954), and Jacqueline de Romilly (Romilly, 1992) to those that deal with specific aspects such as their role in Greek Rhetoric (Kennedy, 1963). Rather, our intention is to call attention to a conception of education in which both the Sophists and Socrates coincide, and which has to do with the active social and political nature of education. We will also deal with some of the differences that may be interesting for our purpose. As in Homeric times, the main purpose of education for the Sophists and Socrates was none other than attaining excellence, areté. 
However, although still maintaining a certain competitive view of excellence (i.e., an approach according to which areté is shown in superiority over other men owing to its origin in the noble and warrior class, as we have seen in the previous section), the meaning of the term
underwent a significant variation. In this age areté was linked to social and political success and, therefore, the main objective of teaching was none other than to form good citizens, aware, as the Greeks of that age were, of the importance of social and political interaction. Thus, learning was not something erudite and private, but had to have a social and public usefulness; in a certain sense, moreover, it was an emancipating task because it guaranteed success and social advancement and, what is perhaps more important, the usefulness of learning was immediately perceived in its application to the social context. What Socrates and the Sophists did disagree on was the possibility of being able to teach areté. According to Socrates it was more a quality of the soul that one did or did not have and which, at most, the “teacher” could help to find inside the disciple through the Socratic dialogical method known as maieutics. The Sophists, however, considered that it was possible to teach, in an orderly and structured way, everything required to be an excellent citizen; such teaching, of course, included, among other things, rhetoric, because one of the keys to social success in a civilization such as the Greek one entailed admiration and respect for those with a beautiful and persuasive diction, those whom today we would call “charismatic.” This, of course, could lead us into a debate as to whether charisma can be taught or not, and so we would return to the polemics between the Sophists and Socrates, but let us leave this question for the moment. Protagoras, according to Plato’s dialogue of the same name, used the myth of Prometheus to show us that all humans have political virtue by order of Zeus himself, who even ordered that all men should cultivate it and practice it under penalty of being exiled from the city (Plato, Protagoras, 320d-322d). 
Without going into whether political virtue can be taught or not, the important thing is that education is defined as an activity oriented towards the social sphere and above all to the
interaction of citizens in a political context in which the command of language and rhetoric plays a major role. The teaching-learning relationship is an eminently linguistic activity. As regards Socrates and his particular method of teaching, there are some differential elements that we would like to call attention to (leaving aside the polemics with the Sophists for the moment). Socratic maieutics is a method based on dialogue, on the art of questioning the disciple so that the latter will be able to find his own answers. Hence, according to Plato’s old teacher, the teacher does not really teach the disciple anything but merely helps him to find for himself the answers which, really, were already inside him. What is really interesting in this methodology is that the student is the one who answers the questions and solves the problems. The teacher’s method consists of knowing how to ask and how to encourage the disciple to look for the answers. Really, the teacher is more a stimulus and a guide than an open book in which to find the solution to problems. Even if this is true (and it probably is), the virtue of the teacher consists of making students believe that they have found the answers to the questions posed by themselves. It is a methodology that gives prominence to the student without the teacher disappearing; the latter is always there, ready to orient and advise. Thus, the Socratic method can be defined as dialogical, process-oriented (we understand learning as a process), and proactive. These characteristics are undoubtedly major elements for an online training methodology on which to construct the professional profile of our e-learning teacher. Furthermore, sophistry has revealed that education has an eminently social nature, and that it is precisely in this context where learning gains meaning, beyond mere erudition without specific usefulness. These elements are equally important when constructing an appropriate method for our new training.
CONCLUSION

What sense is there in posing a reflection on the concept of paideía in Ancient Greece in order to develop a methodology suited to online training? As has been seen at the beginning of this chapter, most e-learning initiatives are set in motion without having a clearly defined method or a strategy suited to the peculiarities of this type of training. Moreover, there seems to be a more or less widespread trend to accept constructivism as an explanatory framework or theoretical presupposition. However, constructivism is an explanation rather than a method, and perhaps this confusion lies at the bottom of many serious errors related to training paradigms for e-learning. Thus, if constructivism indicates to us a desideratum, a goal, but is not a method in itself, the need remains to set up a path on which to trace the route of learning in an interaction framework as peculiar as the one corresponding to online training. In short, all theoretical reflection on this type of training revolves around what should be done, but there is very little effective orientation to indicate how to achieve what we are supposed to do. After analyzing the different conceptions of education throughout history, we feel that the Greek paideía model is perfectly suited both to the presuppositions of the commonly accepted theoretical framework and to a more realistic (and in a certain sense, “classical”) position, according to which a teaching profile is necessary in order to guarantee the success of a training initiative. The model from Ancient Greece offers us the idea that training is a task that falls to the subject being trained, but which is not achieved alone and without the presence of someone who, although remaining in the shadows, will always appear when needed and will be capable of showing us the road to knowledge. 
This knowledge, however, is not understood as a simple acquisition of contents but rather will be developed in capabilities, competencies, and skills which only
make sense if put into practice and therefore are learned along the way. This action, which is the result of knowledge, is revealed in a social context, a context in which new knowledge is produced as the result of the action and interaction of the subjects. Knowledge is, then, the fruit of a social environment. Finally, dialogue and language are the basic elements in the quest for learning, since this is no more than a continual process of questions and answers, answers that lead to new questions… The purpose of these pages was not, then, to develop a method for online training based on the activity of the tutor as a catalyst in the teaching-learning relationship, as has been done in previous studies (Seoane Pardo & García Peñalvo, 2006; Seoane Pardo, García Peñalvo, Bosom Nieto, Fernández Recio, & Hernández Tovar, 2006). On this occasion, on the contrary, we opted to illustrate the groundwork on which to build this method, starting from a model with a long tradition and which, by the way, is to be found in the very foundations of western civilization.
FUTURE RESEARCH DIRECTIONS

The philosophical and epistemological reflections contained in this chapter are part of a more ambitious line of research concerning a new methodology for online training, especially a methodology for training “online teachers” or “tutors” (also known as “facilitators” or “e-facilitators” in other contexts). These considerations, along with the main hypotheses of that methodology, are being tested in several initiatives developed by the University of Salamanca, addressed to different kinds of users in completely different learning contexts, with remarkable success in all the various scenarios where the methodology has been applied. Most of the theories, and even the case studies, related to methodology and didactics in e-learning contexts analyze how students learn in these initiatives, simply describing the scenario or offering a series of suggestions to improve the learning experience. However, the need for a real teaching model for e-learning activities persists, because teaching roles remain fundamental if trainees are to achieve the desired goals, skills, and competences required for any learning activity. Thus, the main challenge for the near future is the definition of a complete methodology for training teachers specifically adapted to online contexts, along with a clear definition of their skills and competences. These studies, currently being tested in real learning contexts, will be complemented by several “user manuals” for online teachers, students, learning content designers, and instructional designers, all of them adjusted to a rigorous quality framework that must preside over the whole process of any learning activity that aspires to deserve the qualification of excellence.
ACKNOWLEDGMENT

This work has been partly financed by the Ministry of Education and Science (Spain), KEOPS Project (TSI2005-00960).

REFERENCES

Anderson, E.M., & Shannon, A.L. (1995). Towards a conceptualization of mentoring. In T. Kerry & A.S. Mayes (Eds.), Issues in mentoring. London: Routledge.

Barberà, E. (2006). Los fundamentos teóricos de la tutoría presencial y en línea: Una perspectiva socio-constructivista. In J.A. Jerónimo Montes & E. Aguilar Rodríguez (Eds.), Educación en red y tutoría en línea (pp. 161-180). Mexico: UNAM FES-Z.

Bruner, J. (1997). La educación, puerta de la cultura. Madrid, Spain: Visor.

Bruner, J. (1998). Desarrollo cognitivo y educación. Madrid, Spain: Morata.

Carruthers, J. (1993). The principles and practices of mentoring. In B.J. Caldwell & E.M.A. Carter (Eds.), The return of the mentor: Strategies for workplace learning. London: Falmer Press.

Cecez-Kecmanovic, D., & Webb, C. (2000). Towards a communicative model of collaborative Web-mediated learning. Australian Journal of Educational Technology, 16(1), 73-85.

Coll, C., Martín, E., Mauri, T., Miras, M., Onrubia, J., Solé, I., et al. (2005). El constructivismo en el aula (15th ed., Vol. 111). Barcelona: Graó.

Dewey, J. (1933). How we think. Boston, MA: Heath.

Dewey, J. (1938). Experience and education. New York: Macmillan.

Dougiamas, M., & Taylor, P.C. (2003). Moodle: Using learning communities to create an open source course management system. Paper presented at the EDMEDIA 2003 Conference, Honolulu, Hawaii.

Garrison, D.R., & Anderson, T. (2003). E-learning in the 21st century: A framework for research and practice. London; New York: RoutledgeFalmer.

Guthrie, W.K.C. (1971). The sophists. London: Cambridge University Press.

Jaeger, W. (1945). Paideia: The ideals of Greek culture (G. Highet, Trans.). New York: Oxford University Press.

Kennedy, G.A. (1963). The art of persuasion in Greece. Princeton, NJ: Princeton University Press.

Little, J.W. (1990). The mentor phenomenon and the social organisation of teaching. In C.B. Courtney (Ed.), Review of Research in Education, 16, 297-351. Washington, DC: American Educational Research Association.

Moodle. (2007). Philosophy. Retrieved October 17, 2007, from http://docs.moodle.org/en/Philosophy

Romilly, J. de (1992). The great sophists in Periclean Athens. Oxford, UK: Clarendon Press; New York: Oxford University Press.

Seoane, A.M., García, F.J., Bosom, Á., Fernández, E., & Hernández, M.J. (2007). Online tutoring methodology approach. International Journal of Continuing Engineering Education and Life-Long Learning (IJCEELL), 17(6), 479-492.

Seoane Pardo, A.M., & García Peñalvo, F.J. (2006). Determining quality for online activities: Methodology and training of online tutors as a challenge for achieving excellence. WSEAS Transactions on Advances in Engineering Education, 3(9), 823-830.

Seoane Pardo, A.M., García Peñalvo, F.J., Bosom Nieto, Á., Fernández Recio, E., & Hernández Tovar, M.J. (2006). Tutoring online as quality guarantee on e-learning-based lifelong learning: Definition, modalities, methodology, competences and skills (CEUR Workshop Proceedings). Virtual Campus 2006. Selected and Extended Papers, 186, 41-55.

Untersteiner, M. (1954). The sophists. New York: Philosophical Library.

Vygotsky, L.S., & Cole, M. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Wenger, E. (1998a). Communities of practice: Learning as a social system. Retrieved October 17, 2007, from http://www.co-i-l.com/coil/knowledge-garden/cop/lss.shtml

Wenger, E. (1998b). Communities of practice: Learning, meaning and identity. Cambridge: Cambridge University Press.

ADDITIONAL READINGS
Anagnostopoulos, D., Basmadjian, K.G., & McCrory, R.S. (2005). The decentered teacher and the construction of social space in the virtual classroom. Teachers College Record, 107(8), 1699-1729.

Ardizzone, P., & Rivoltella, P.C. (2003). Didattiche per l'elearning. Metodi e strumenti per l'innovazione dell'insegnamento universitario. Roma: Carocci editore.

Bereiter, C., Scardamalia, M., Cassells, C., & Hewitt, J. (1997). Postmodernism, knowledge building, and elementary science [Special issue: Science]. Elementary School Journal, 97(4), 329-340.

Jonassen, D.H., Carr, C., & Yueh, H.-P. (1998). Computers as mindtools for engaging learners in critical thinking. TechTrends, 43(2), 24-32.

Maldonado, T. (1994). Lo real y lo virtual. Barcelona: Gedisa.

Marcelo, C., Puente, D., Ballesteros, M.A., & Palazón, A. (2002). E-learning-teleformación. Diseño, desarrollo y evaluación de la formación a través de Internet. Barcelona: Gestión 2000.

Ruipérez, G. (2003). Educación virtual y eLearning (1st ed.). Madrid: Fundación Auna.

Scardamalia, M. (2002). Collective cognitive responsibility for the advancement of knowledge. In B. Smith (Ed.), Liberal education in a knowledge society (pp. 67-98). Chicago: Open Court.

Scardamalia, M., & Bereiter, C. (2003). Knowledge building environments: Extending the limits of the possible in education and knowledge work. In A. DiStefano, K.E. Rudestam, & R. Silverman (Eds.), Encyclopedia of distributed learning. Thousand Oaks, CA: Sage Publications.

Scardamalia, M., & Bereiter, C. (2003). Knowledge building. In J.W. Guthrie (Ed.), Encyclopedia of education (2nd ed., pp. 1370-1373). New York: Macmillan Reference.

Vygotsky, L.S., & Kozulin, A. (1986). Thought and language (Rev. ed.). Cambridge, MA: MIT Press.
Chapter IV
E-Mentoring:
An Extended Practice, An Emerging Discipline

Angélica Rísquez
University of Limerick, Ireland
ABSTRACT

This chapter integrates existing literature and developments on electronic mentoring to build a constructive view of this modality of mentoring as a concept qualitatively different from its traditional face-to-face version. The concept of e-mentoring is introduced by looking first into the elusive notion of mentoring. Next, some salient e-mentoring experiences are identified. The chapter goes on to note the differences between electronic and face-to-face mentoring, and how the relationship between mentor and mentee is modified by technology in unique and definitive ways. Readers are also presented with a collection of best practices on the design, implementation, and evaluation of e-mentoring programs. Finally, some practice and research trends are proposed. In conclusion, the author draws an elemental distinction between the two modalities of mentoring, defining e-mentoring as more than a deficient alternative to face-to-face contact.
INTRODUCTION

The technology revolution has changed the way we live in our world, including what we understand about mentoring and how it happens. Information and communication technologies (ICTs) have become central given their potential for democratizing access to knowledge, their incorporation into professional competences, and their capacity to improve learning (Gisbert, 2004). During the last two decades, ICTs have offered new and exciting opportunities for transcending the physical and psychological distance between people. Accounts of the potential of ICT for mentoring relationships started appearing in the literature in the late 1980s and early 1990s (Moore, 1991), and have since grown into a worldwide phenomenon. The first online
version of the original contribution to UNESCO's World Communication and Information Report (Blurton, 1999) notes the potential of ICT to enable mentoring programs to provide guidance to individuals by well-established members of a particular community. Blurton (1999) notes that "such virtual collaborations between individuals are an effective way for senior members of a community to teach, inspire and support newcomers" (p. 12). A simple Web search using the terms "electronic mentoring," "e-mentoring," "online mentoring," or "telementoring" identifies a large number of programs initiated by educational institutions, corporations, and communities around the globe, in which support to individuals is facilitated by the use of ICT. This chapter presents the developments of the last decade in computer-mediated mentoring, starting with a consideration of the general concept of mentoring.
BACKGROUND

A Multifaceted and Elusive Concept

The term "mentoring" was coined from Homer's Odyssey, in which the young Telemachus was assigned Mentor as his companion and advisor during the long absence of his father. Since the late 1970s, the term has been adopted to promote the value of institution- or organization-based relationships to an individual's personal and professional development. Much emphasis is placed on empathy and trust (Eby, 1997), and most authors agree that the benefits of mentoring tend to emerge only over a relatively long period of time (Rhodes, 2002). Mentoring is a growing practice that has been extensively documented in Anglo-Saxon literature as a means to facilitate transitional adjustment and personal or professional development (Allen, McManus, & Russell, 1999; Eby, 1997; Gray & Gray, 1990; Kram, 1985; McMahon, Limerick, & Gillies, 2002; Smith & Ingersoll, 2004). Miller (2004) refers to "transition mentoring" to describe programs that target individuals during times of transition at any moment in life, for example, educational and career transitions. In transition mentoring, a paired relationship is established between a more senior individual, or mentor, and a less experienced individual, or mentee, in order to develop competences and orientations towards survival that the newcomer might otherwise have acquired only slowly and with at least some difficulty. The literature also suggests that effective mentoring relationships should be trust based and power free (P. B. Single & R. M. Single, 2005a). This is often referred to as "the value of impartiality," the benefit associated with being mentored by someone who has no vested interest in your choices and no ulterior motives for mentoring. In short, it is useful to find a mentor who has no stake in the newcomer's performance and with whom the newcomer can share common experiences. Peer-to-peer relationships offer useful orientations to a mentoring system, involving a degree of social responsibility to the community in ways that attempt to confront and reverse an ever-increasing individualistic, competitive approach to career, education, and life development (Allen et al., 1999; McLean, 2004; O'Regan, 2006). In addition to these benefits, peer mentors may be in a better position than traditional mentors to share information, offer credible advice, listen to mentees' concerns, and serve as role models. Allen et al. (1999) demonstrated the effectiveness of psychosocial and career-related peer mentoring, showing that there are different dimensions of the socialization of newcomers that peers can facilitate (politics, performance, and establishment of relationships with organizational members). The authors underscore the valuable role that more experienced peers can play in enhancing socialization.
Arguably, peer mentors may have training and support needs that program organizers must take into careful consideration.
However, what is understood by "mentoring," and its manifestations, is very diverse. The idea of a strongly interpersonal relationship which provides a "safe place" for the newcomer to address his or her development needs associates mentoring with the area of counseling, although there are important distinctions between the two (Stokes, Garrett-Harris, & Hunt, 2003). Mentoring is also often confused and mixed with other concepts, such as tutoring, coaching, and advising. It is difficult to draw a distinction between these, and the term used depends very much on local and national contexts and traditions. O'Neill and Harris (2005) draw a fairly clear distinction between "tutoring" and "mentoring" as follows:

Tutoring is often confused with mentoring because it involves an ongoing relationship between a student [and by extension a new employee] and a more knowledgeable person, but there are important differences. (…) In tutoring, the objective is that the student [employee] masters a well-defined domain. The expert assigns the student [employee] a problem (…), then evaluates the student's [employee's] performance (...) Throughout, the tutor is typically in control of which problems the student [employee] addresses. Mentoring is quite different in that interactions usually evolve around problems that the junior party brings to the table. (p. 113)

There can be components of mentoring in tutoring, and of tutoring in mentoring, but the primary goals of the two programs are different. Most definitions distinguish mentoring from a situation where the mentor provides solutions to the mentee, and emphasize instead the reciprocal, complex, and multilayered nature of the relationship. To condense this elusive concept, it is useful to remind the reader of what mentoring is and is not, as summarized in Table 1.
A European Perspective

The popularity of mentoring, long accepted in Anglo-Saxon academic and organizational environments, is rising strongly in the European context as a means of guidance, support, and socialization. A recent resolution by the European Council, aimed at establishing policies and practices in the field of guidance throughout life, includes mentoring in its main definition of "guidance" (EC, 2004b, p. 2). The document stresses that the role of guidance and mentoring is to provide significant support to individuals during their transitions between levels and sectors of education, training systems, and working life (2004b, p. 3). The document also strongly recommends that the beneficiaries of guidance be at the centre of the process, both in terms of design and delivery. O'Regan (2006) highlights that mentoring is receiving a higher profile than ever
Table 1. What is mentoring?

• Mentoring is an enhancement of other forms of social, emotional, psychological, and intellectual support; it is not an isolated solution to problems.
• Mentoring is a dynamic process that engages both mentee and mentor in self-learning, action, and reflection; it is not something that is done TO an individual.
• Mentoring is transformational, organic, complex, multidimensional, and somewhat unpredictable, and requires mutual engagement; it is not passive or mechanistic.
before. The author quotes Gränzer's presentation at ENCYMO (the Mentoring in Europe conference held in Liverpool in 2005) on the discussions currently taking place in the European Commission relating to the growth and expansion of mentoring as a key element in the support of individuals across multiple contexts. The UK has a significant lead over other European countries, with several million pounds of governmental funding invested in the National Mentoring Network through the Aimhigher program.
ISSUES, CONTROVERSIES, AND PROBLEMS

E-Mentoring

Time and space constraints often create an obstacle that prevents mentors and mentees from meeting as frequently as they should (if at all), an outcome that has undermined traditional face-to-face mentoring relationships more than any other factor, according to Noe (1998). As a result, organizations and institutions across the globe have embraced the access opportunities that computer-mediated communication promises for mentoring. E-mentoring is defined by Single and Muller (1999) as a naturally occurring or paired relationship, conducted primarily through electronic communication, that is established between a more senior individual ("mentor") and a less experienced individual ("protégé" or "mentee"), intended to develop and grow the skills, knowledge, confidence, and cultural understanding of the mentee in order to help the mentee succeed. P. B. Single and R. M. Single (2005b, p. 10) elaborate further on the definition for structured e-mentoring programs, informed by the work of the face-to-face structured mentoring field: ...occurs within a formalized program environment, which provides training and coaching to
increase the likelihood of engagement in the e-mentoring process, and relies on program evaluation to identify improvements for future programs and to determine the impact on the participants. (p. 10)

Ensher, Heun, and Blanchard (2003) categorize e-mentoring according to the amount of electronic communication that takes place within the relationship. At one end of the continuum are full e-mentoring relationships (computer-based communication only). At the other extreme is face-to-face mentoring with ICT support, and somewhere in the middle lies blended mentoring, a combination of face-to-face and online mentoring. Hamilton and Scandura (2003) specify further and state that, to be called e-mentoring, 75% or more of the mentorship must take place through electronic means. A review of the literature focusing on support approaches in electronic collaborative learning environments yields a variety of concepts (e.g., e-tutoring, online mentoring, e-coaching, e-moderating) used to address the roles, tasks, and responsibilities of online facilitators (De Smet, Van Keer, & Valcke, forthcoming). Much of the above discussion on the differences between mentoring and tutoring applies to their electronic versions, and a case can be made that what is termed "e-mentoring" is often difficult to distinguish from e-moderating, e-coaching, or e-counseling. Moreover, the technical and interpersonal competences required of e-mentors overlap with those of e-moderators and e-tutors, and much of the literature dealing with best training practices in e-mentoring emanates from best practices and research findings in other areas related to computer-mediated communication (CMC) (Kasprisin, Single, Single, & Muller, 2003; O'Neill & Harris, 2000). E-mentoring systems have been introduced in many contexts with a wide variety of purposes: facilitating expatriate or newcomers' adjustment (Beitler & Frady, 2001; Dewart, Drees, Hixenbaugh, & Williams, 2004), career development (Tesone & Gibson, 2001; Wadia-Fascetti & Leventman, 2000), support to entrepreneurs and SMEs (Perren, 2003; Stokes, 2001), curriculum-based learning (O'Neill & Harris, 2000), and higher participation in academia by minority groups (Headlam-Wells, Gosland, & Craig, 2005; McMahon et al., 2002; Single & Muller, 2001). Next, MentorNet, an outstanding example of e-mentoring, is presented (see Figures 1-3).
MentorNet: A Great Success of E-Mentoring

By 2003, MentorNet had served more than 2,800 mentees. Nowadays, the organization has around 20,000 members and has coordinated more than 9,000 e-mentoring relationships. Importantly, the evaluation of the results of nearly 10 years of experience is greatly helping to canvass the potential and challenges of e-mentoring (Single & Muller, 2000; Single, Muller, Cunningham, & Single, 2000; Single & Single, 2005a, 2005b). Many other e-mentoring projects have been inspired by the work of MentorNet, with the common objective of enhancing the presence of women in target areas and professions, for example, the EU-funded initiative Empathy-Edge in the UK (Headlam-Wells et al., 2005).
A Qualitatively Different Experience

E-mentoring programs do have some fundamental similarities with their face-to-face counterparts. The starting point is essentially the same: a one-to-one liaison between two individuals based on a mutual commitment to developing the skills of the less experienced of the two towards some broad organizational or institutional objective (Conway, 1998). In order to function effectively, both electronic and face-to-face systems must be planned, implemented, and monitored properly, with a coordination system that supports, but is somewhat independent of, the participants. Both are also affected by wider organizational and personal factors, including culture and norms, management support, and the degree of top-level commitment to the success and longevity of the program. However, e-mentoring and face-to-face mentoring are also different in many ways. A literature review of the opportunities and challenges of computer-mediated mentoring as opposed to its traditional face-to-face version has highlighted
Figure 1. Homepage of MentorNet (www.mentornet.net)
Figure 2. MentorNet e-Forum discussion groups
Figure 3. Mentor profile (to be filled before participating in one-to-one mentoring)
Table 2. Contact. Differences between face-to-face and e-mentoring (a literature review)
FACE-TO-FACE
• Rigid, dependent on space and time

ELECTRONIC
• Flexible, independent of space and time
Table 3. Timing. Differences between face-to-face and e-mentoring (a literature review)

FACE-TO-FACE
• Immediate; pressure to respond immediately

ELECTRONIC
• Asynchronous tools (discussion forums, e-mail): delayed, without the pressure of responding immediately. The process may not be timely if responses are not quick
• Synchronous tools (chat): pressure to respond immediately
Table 4. Implications of the communication channel. Differences between face-to-face and e-mentoring

FACE-TO-FACE
• Rich in nonverbal cues, wealth of emotional information
• Participants can learn from the other person's immediate reactions
• For some individuals, face-to-face interaction is seen as warmer and richer; others find it difficult and feel exposed
• Misunderstandings can be clarified as they appear if participants have the required social skills
• First impressions may play a greater role

ELECTRONIC
• Nonverbal cues are missing; alternative expression of emotions is required
• Not needing to take account of another person's immediate reactions ("self-absorption") may facilitate self-awareness and the provision of honest feedback
• For some individuals, the communication style can be safer and less intimidating; others perceive it as a cold medium
• Miscommunication may happen. In extreme cases, CMC can turn hostile as inhibitions are lowered
• Less information is exchanged, so relationships develop slowly, but this allows for greater privacy
• Hyper-relationships may happen (participants form a better opinion of the other than they would if they were physically interacting)
Table 5. Skills required. Differences between face-to-face and e-mentoring (a literature review)

FACE-TO-FACE
• Conventional social skills are required

ELECTRONIC
• Social skills, computer literacy, good written communication, and netiquette are required
• More frequent and explicit purpose-setting, progress-reporting, and problem-solving communications may be necessary
Table 6. Role of social differences. Differences between face-to-face and e-mentoring (a literature review)

FACE-TO-FACE
• Status differences play a greater role

ELECTRONIC
• Status differences are attenuated
Table 7. Pairing and scalability. Differences between face-to-face and e-mentoring (a literature review)

FACE-TO-FACE
• Space/time restrictions may impose limitations on the pairing of mentors, taking precedence over expertise
• Physical proximity and personal schedules may pose high barriers to entry

ELECTRONIC
• Space/time flexibility is likely to provide greater choice in the pairing of mentors and protégés and extend opportunities to participate
• The ease with which virtual relationships can be started and ended can weaken commitment. Also, the nature of the communication can promote minimal contact between participants, shorter programs, inadequate planning, mentor training, and follow-up
• Mentors often find it difficult to find out about their mentees' needs and frustrations, and are reliant on their mentees to express them
Table 8. Records of the interaction. Differences between face-to-face and e-mentoring (a literature review)

FACE-TO-FACE
• There is usually no record. Information is sometimes collected using questionnaires or rubrics and is retrospective

ELECTRONIC
• The interaction can be recorded automatically and just in time. Mentors and protégés tend to find these records helpful
Table 9. Monitoring and evaluation. Differences between face-to-face and e-mentoring (a literature review)

FACE-TO-FACE
• Use of secondary sources (participants' reports and coordinator's notes)

ELECTRONIC
• A primary source of information (electronic records) allows for content analysis, analysis of participation patterns, lurking, and so forth
Table 10. Ethical implications. Differences between face-to-face and e-mentoring (a literature review)

FACE-TO-FACE
• The relationship is usually confidential, although ethical issues must be dealt with (like participants' selection)

ELECTRONIC
• Electronic records may involve additional confidentiality and ethical issues, which may also impact the communication
Table 11. Cost and access. Differences between face-to-face and e-mentoring (a literature review)

FACE-TO-FACE
• Depending on geographic and time circumstances, it can be a cost-effective solution or a cost-intensive one
• There may be other associated costs, like activities during meetings

ELECTRONIC
• Depending on whether participants have easy access to computers and the Internet, e-mentoring is either a cost-effective option or one that may create a digital divide (fewer e-mentoring opportunities for those who cannot access the technology)
differences (Bierema & Merriam, 2002; Ensher et al., 2003; Harrington, 1999; McCormick & Leonard, 1996; O'Neill & Harris, 2000, 2005; O'Neill, Harris, Cravens, & Neils, 2002; Single & Muller, 1999), summarized in Tables 2-11 according to each issue considered. Of course, the two varieties of mentoring are not necessarily mutually exclusive, and they can complement each other if circumstances make a blended approach possible. However, the issues raised above indicate that both types of mentoring represent quite different ways of striving for a common goal. Bierema and Merriam (2002) share the view that e-mentoring is "qualitatively different than traditional face-to-face mentoring" and that "the virtual medium provides a context and exchange that may not be possible to replicate in face-to-face mentoring relationships" (2002, p. 219). Therefore, in Harris' words (O'Neill et al., 2002), the important question is not whether e-mentoring is better or worse than face-to-face mentoring, but rather what e-mentoring can bring "for long in-depth, productive, mutually beneficial interactions when the same can't happen face-to-face." P. B. Single and R. M. Single (2005b, p. 14) elaborate in this direction and note that the primary benefit of e-mentoring lies in the value of connections between organizations. For them, e-mentoring facilitates the "strength of weak ties," since electronic communications render geographical distances irrelevant and provide mentoring opportunities to wider and more diverse groups of people. Given the potential drawbacks that e-mentoring may involve, as noted in the tables above, some authors (O'Neill et al., 2002) argue that deeply personal, long-term relationships are not likely to work so well online. However, there are also equally important forms of mentoring that can provide people with guidance and advice as they enter into and move through unfamiliar organizations, communities, and stages in life. E-mentoring is likely to find its niche among these modalities of mentoring, focused on shorter-term professional or academic objectives.
BEST PRACTICE

Coming from this view of e-mentoring as a discipline and practice "in its own right," a 360-degree review of effective practice along the life span of a mentoring program is presented next. This review combines suggestions for best
Table 12. Statement of purpose and long-range plan. Best practice on e-mentoring DESIGN (a literature review)

• State what ultimate purpose the program is designed for: career development, academic support, socialization, and so forth (MENTOR, 2001)
• Specify who, what, where, when, why, and how activities will be performed (MENTOR, 2001)
• Set realistic, attainable, and measurable goals, objectives, and timelines (MENTOR, 2001)
• Decide on a typology of mentoring (senior to junior or peer to peer, individual or group based, etc.) (Miller, 2002)
• Carry out a small-scale pilot and introduce change gradually (Ross, 2004)
Table 13. Relevant populations and stakeholders. Best practice on e-mentoring DESIGN (a literature review)

• Assessment of potential mentees' needs and the pool of mentors (MENTOR, 2001)
• Adherence to general principles of volunteerism (MENTOR, 2001)
Table 14. Contextualization. Best practice on e-mentoring DESIGN (a literature review)

• Research local and national e-mentoring schemes (Ross, 2004)
• Assessment of the organization's readiness, capacity, and will to create and sustain a high-quality e-mentoring program, collecting input from originators, staff, potential volunteers, and potential mentees (MENTOR, 2001)
• Build upon the knowledge obtained in face-to-face mentoring experiences in the institution/organization (O'Neill et al., 2002)
• Sustain involvement of staff with funded time, meaning designated time within their day (instead of something extra in addition to their regular duties) (O'Neill et al., 2002)
• Build relationships carefully with all stakeholders (O'Neill et al., 2002)
• Adjust to the institution/organization's periods of intensive work, holidays, and so forth (Ross, 2004)
Table 15. Technology strategy. Best practice on e-mentoring DESIGN (a literature review)

• Carry out a thorough IT audit of all involved (Ross, 2004)
• Choose a communication system that is (a) appropriate to the goals of the program, (b) relevant to participants' context, (c) safe and reliable, and (d) affordable (MENTOR, 2001)
• Policies regarding privacy and security of program participants' data and communication (MENTOR, 2001)
• Method for archiving e-mails to meet safety and evaluation needs (MENTOR, 2001)
Table 16. Promotion and marketing policy. Best practice on e-mentoring DESIGN (a literature review)

• Year-round marketing and public relations (MENTOR, 2001)
• Prepare and distribute an information pack for teachers (Ross, 2004)
Table 17. Safety measures. Best practice on e-mentoring DESIGN (a literature review)

• Establishment of a code of online conduct guided by common sense, basic netiquette, and mutual respect (MENTOR, 2001)
• Adherence to rules and laws that apply in face-to-face mentoring, as well as those unique to online mentoring, for example, confidentiality of program participants' personal information (MENTOR, 2001)
• Comprehensive background checks and screening of mentors (MENTOR, 2001)
• Process for raising and addressing concerns (MENTOR, 2001)
• Exit clause (MENTOR, 2001)
Table 18. Recruitment plan. Best practice in e-mentoring program IMPLEMENTATION (a literature review)
• Strategies that reflect accurate expectations and benefits (MENTOR, 2001)
• Targeted outreach based on mentees' needs and interests (MENTOR, 2001)
• Volunteer opportunities beyond mentoring (MENTOR, 2001)
• Basis in your program's statement of purpose and long-range plan (MENTOR, 2001)
• Recruit early, before participants are caught up in their daily activities (Single & Muller, 2005)
• Design different "calls for participants" and application forms for mentors and mentees (Single & Muller, 2005)
• In addition to electronic communication, use alternative recruitment channels such as newsletters, heads of department, student/staff representatives, and so forth (Single & Muller, 2005)
• Manage expectations carefully before training: program goals, eligibility criteria, frequency of expected contact, and so forth (Single & Muller, 2005)
Table 19. Eligibility screening. Best practice in e-mentoring program IMPLEMENTATION (a literature review)
• Reference checks for mentors, especially if working with underage mentees (MENTOR, 2001)
• Suitability criteria that satisfy the program statement of purpose and the needs of the target population (these could include personality profile, skills identification, gender, age, geography, language, race, career interests, level of education, previous volunteer experience, and so forth) (MENTOR, 2001)
Table 20. Induction and orientation. Best practice in e-mentoring program IMPLEMENTATION (a literature review)
• Successful completion of training and orientation (MENTOR, 2001)
• Separate orientation for mentors and mentees (MENTOR, 2001), including:
  a. Reinforcement of expectations: job/role descriptions, restrictions (accountability)
  b. Description of the technology used and the access needed
  c. Level of commitment expected (time, energy, flexibility, frequency)
  d. Benefits and rewards of participation
  e. Summary of program policies, including those governing privacy, reporting, communications, and evaluation
  f. Safety and security, especially around use of the Internet
  g. Cultural/heritage sensitivity and appreciation training
  h. Do's and don'ts of managing the relationships
  i. Crisis management/problem-solving resources
  j. Support materials and ongoing sessions as necessary
  k. Suggestions on how to get started
• Decide on a method for delivery: face-to-face, online, or blended. If choosing online, options include moderated discussion groups, Web-based threaded discussion lists, or a Web-based training tutorial based on case studies, sample responses, simulation, and so forth (Single & Muller, 2005)
Table 21. Coaching. Best practice in e-mentoring program IMPLEMENTATION (a literature review)
• Guide the e-mentoring pairs along the relationship, starting with initiation and moving through cultivation, separation, and redefinition (Kram, 1985) (Single & Muller, 2005)
• Coach in a networked environment, using messages containing discussion suggestions, mentoring tips, and so forth (Single & Muller, 2005)
• Keep coaching messages short and frequent (weekly or every other week) (Single & Muller, 2005)
• Conclude coaching messages by soliciting feedback from the participants (Single & Muller, 2005)
• Consider techniques that address the development of the participants' expectations and role acquisition (O'Neill & Harris, 2005):
  - Iterative cycles: give participants the chance to experience different mentors and mentees
  - Direct facilitation: interaction by a third party, who follows and participates in the mentoring dialogue, assisting and suggesting
  - Open access to models: a shared electronic workspace that allows mentors and mentees to observe and learn from others' e-mentoring relationships
• Deal with lurkers: check that all participants know how to post and reply to messages, provide test areas and arrivals areas, have a free-flowing social conferencing area, give participants time to get used to the online environment, and provide areas for safe reflections and comments (Salmon, 2000)
Table 22. Matching and re-matching. Best practice in e-mentoring program IMPLEMENTATION (a literature review)
• Grounding in the program's eligibility criteria (MENTOR, 2001)
• Choose a matching method (Single & Muller, 2005):
  - Participant choice: works best when those available for listing are plentiful and when one group is recruited before the other; however, it may give rise to inappropriate matches and to participants being left without a match
  - Unidirectional matching: mentees identify preferences for a mentor, and the coordinator matches mentees' preferences with mentors' characteristics
  - Bidirectional matching: both mentees and mentors identify preferences for e-mentoring partners, and the coordinator takes all preferences into account
• Let mentors and mentees know the process by which they will be matched (Single & Muller, 2005)
• Allow the participants to review, accept, or reject their e-mentoring partnerships (Single & Muller, 2005)
Table 23. Monitoring. Best practice in e-mentoring program IMPLEMENTATION (a literature review)
• Consistent and regular communications with staff, mentors, and mentees (MENTOR, 2001)
• Tracking system for ongoing assessment (MENTOR, 2001)
• Written records (MENTOR, 2001)
• Guidelines for support and conflict resolution (Ross, 2004)
• Rationale for the selection of this particular monitoring strategy (Ross, 2004)
• Monitor e-mails systematically (Ross, 2004)
Table 24. Support, recognition, and retention. Best practice in e-mentoring program IMPLEMENTATION (a literature review)
• Formal kick-off (MENTOR, 2001)
• Process for managing grievances, rematching, interpersonal problem solving, handling crises, and bringing closure to relationships that end prematurely (MENTOR, 2001)
• Ongoing peer support for volunteers (MENTOR, 2001)
• Social gatherings of different groups as appropriate (MENTOR, 2001)
• Ongoing recognition and appreciation (MENTOR, 2001)
• Newsletters or other communications to mentees, mentors, and support staff (MENTOR, 2001)
• Program Web site with a participant guideline posted on it (MENTOR, 2001)
• Keep a closed mentor list, so mentors can get feedback and advice from each other; a moderator prompts early introductions and periodically seeds the list with discussion topics (Single & Muller, 2005)
Table 25. Closure steps. Best practice in e-mentoring program IMPLEMENTATION (a literature review)
• Private and confidential exit interviews to debrief the mentoring relationship, held between mentees and staff, mentors and staff, and mentors and mentees (MENTOR, 2001)
• Clearly stated policy for future contacts between mentors and mentees (MENTOR, 2001)
• Assistance for mentees in defining next steps for achieving personal goals (MENTOR, 2001)
• Organize a formal end to the program, which might include a celebration and certificates (MENTOR, 2001)
Table 26. Types of data collected. Best practice in e-mentoring program EVALUATION (a literature review)
• Obtain benchmarking data after the pilot program (Single & Muller, forthcoming)
• During and after the program, collect three types of information (Single & Muller, forthcoming):
  - Involvement data: frequency of interactions and continuation of mentoring relationships for the duration of the program
  - Formative data: participants' satisfaction with the program and examinations of the matching protocol and of the content of the mentoring interactions, which will guide the future enhancement of the program
  - Summative data: assessment of the program goals achieved, which serves as a standard for comparison with a control group (students who do not undergo mentoring) and addresses sustainability and expansion, with stakeholders and funders as the main audience
Table 27. Moment. Best practice in e-mentoring program EVALUATION (a literature review)
• Ongoing evaluation, rather than evaluation only at the end of the program (MENTOR, 2001)
Table 28. Dissemination. Best practice in e-mentoring program EVALUATION (a literature review)
• Consideration of the information needs of the program's board, funders, communication partners, and other supporters (MENTOR, 2001)
• Sharing of program information and lessons learned with program stakeholders and the broader mentoring community (MENTOR, 2001)
Table 29. "Don'ts" on e-mentoring (a literature review)
When planning and running an e-mentoring program, don't…
• … rush or underestimate the time required to set up and plan the scheme, including coordinator, mentor, and mentee training
• … commit to a long-term scheme initially
• … assume mentees and mentors have good e-mail skills or easy access to equipment
• … assume the software will deal with all risks or that everything is running smoothly
• … let information technology "take over the show"; at best, IT should enable participants to meet their traditional goals in a better way than was practically possible before
• … engage in e-mentoring if you do not have experience in face-to-face mentoring
• … do it for marketing or public relations purposes; do it only when a genuine need is perceived and a realistic plan can be implemented long term
• … use it as a replacement for a face-to-face mentoring program, particularly with populations at high risk (of failure, violence, and so on)
practice as published by diverse authors and has been divided into three main program phases: (a) design and planning, (b) implementation, and (c) evaluation. Recommendations and sources are summarized in Tables 12 through 28. Program managers should also keep in mind some important "don'ts" on planning and running an e-mentoring program, as recommended by Ross (2004) and O'Neill et al. (2002) and summarized in Table 29.
FUTURE PRACTICE DIRECTIONS

The main threat to the survival of many e-mentoring initiatives is that of long-term sustainability. In the case of large projects in the U.S., O'Neill et al. (2002, p. 7) foresee a swift move in the coming years from national-scale, generalist programs to more localized and customized versions when they state, "The most important issue for e-mentoring as it moves into the future is tailoring e-mentoring initiatives to fit local needs. (...) even so if this means working in a less organized way and with fewer resources." The authors go on to point to the importance of creating software and guidance materials that will assist in the development of small e-mentoring initiatives in those circumstances where local knowledge is critical to success. This may be the way forward for initiatives like Aimhigher in the UK, which has just been granted an extra year of "grace" after the initiative had officially run out of governmental support. As e-mentoring becomes a more widely known and accepted modality of support, its permeation in Europe is highly likely to increase, partly as a result of the emphasis placed on lifelong learning. It is important, however, that the expertise developed at the grassroots level is harnessed and made the most of, so as to contribute to the success of new, larger initiatives in the European context. In the best-case scenario, the coming years will witness the consolidation of national and cross-national
communities of practice that promote the sharing of knowledge and resources. Mutual collaboration is likely to reinforce the sharing of expertise and resources that combine mentoring with other student-centered methodologies, as well as programmatic efforts based on best practice and demonstrated outcomes. As Harris notes (O'Neill et al., 2002): The kind of skills, sensibilities, and problem solving abilities that will be necessary to succeed in an increasingly complex and technologically saturated society will not be developed in learners who look to the technology to teach them. E-mentoring is an excellent and natural vehicle for starting to create authentic, learner centered instruction of this rich and complex variety. (p. 11)
FUTURE RESEARCH DIRECTIONS

It has been noted that the proliferation of online mentoring programs has been underpinned by very practical reasons of access and convenience. However, the benefits of these initiatives have often been assumed rather than demonstrated, and their positive outcomes have largely been based on speculation and anecdotal evidence. Compared to the plethora of Web sites connecting mentors and mentees, very little research has been done on program effectiveness. With some exceptions (Asgari & O'Neill, 2004; Calder, 2004; Carlsen & Single, 2000; Dewart et al., 2004; Headlam-Wells, 2004; Headlam-Wells et al., 2005; O'Neill & Harris, 2000), follow-up research on the benefits of mentoring tends to be much less frequent than the introduction of such programs. Comprehensive literature reviews and theoretical papers on the subject are also scarce, again with the exclusion of the work of a few notable authors (Bierema & Merriam, 2002; Ensher et al., 2003; Harrington, 1999; Harris, O'Bryan, & Rotenberg, 1996; O'Neill, 2004; Perren, 2003; Single & Muller, 1999; P. B. Single & R. M. Single, 2005b).
Moreover, existing research agendas have often been outlined from a comparative perspective between e-mentoring and traditional face-to-face programs (for example, in Ensher et al., 2003), rather than by treating e-mentoring in its own right. Future research questions should, rather, gravitate around the opportunities and limitations that e-mentoring brings, how to monitor mentoring relationships most effectively, what ethical and policy issues are involved in keeping electronic records of the interactions, how to evaluate e-mentoring programs most effectively, and so on. Much more can also be done to benchmark e-mentoring practices across different contexts, for example, by comparing the potential and dangers of e-mentoring in the academic world and in working life. Single and Muller (2005, pp. 13-19) suggest some possible research questions in this direction:
• What motivates mentors to volunteer for such programs?
• Which matching variables are most strongly related to successful outcomes?
• How do matching methods and closeness of match influence mentoring outcomes, such as involvement in the program and the benefits gained by both the mentors and the students?
• Which are the most effective and efficient methods for training delivery? And do these depend on the type and size of the mentor and mentee populations?
• How frequently should coaching messages be sent? What content is most useful for those engaged in online mentoring?
• What is an acceptable benchmark level of involvement with an e-mentoring program?
In line with Harrington’s (1999) suggestions, future exploration of e-mentoring programs should also move away from positivist approaches towards inquiries into social activity. What is clear is that at this stage, sharing research and
practice across institutions and countries is indispensable.
CONCLUSION

In summary, the experiences and research presented paint a picture of e-mentoring that is diverse and packed with avenues for creativity. It was said at the beginning of this chapter that the practice of e-mentoring developed upon the foundations of the large body of research on its face-to-face modality. However, the standpoint of this chapter is that by measuring the effectiveness and efficacy of an e-mentoring program using traditional face-to-face arrangements as a benchmark, the initial rationale for setting up e-mentoring programs is defeated. In other words, if organizers start from the belief that e-mentoring is a quick and economical choice that substitutes a snazzy Web site for appropriate support structures, taking away the pain of administration and monitoring, then a case for keeping traditional face-to-face mentoring at all costs should be made. However, if emphasis is placed on the relationship between mentor and mentee, on the importance of screening, training, and supporting mentors, and on sound program evaluation, then the question becomes what e-mentoring can do for newcomers who would not have been reached in a traditional program.
REFERENCES

Allen, T. D., McManus, S. E., & Russell, J. E. A. (1999). Newcomer socialization and stress: Formal peer relationships as a source of support. Journal of Vocational Behaviour, 54(3), 453-470.

Asgari, M., & O'Neill, K. (2004). What do they mean by success? Contributors to perceived success in a telementoring program for adolescents. Paper presented at the Annual Meeting of the American Educational Research Association, San Diego, CA.
Beitler, M. A., & Frady, D. A. (2002). E-learning and e-support for expatriate managers. In H. B. Long & Associates (Eds.), Twenty-first century advances in self-directed learning (CD). Boynton Beach, FL: Motorola University.

Bierema, L. L., & Merriam, S. B. (2002). E-mentoring: Using computer mediated communication to enhance the mentoring process. Innovative Higher Education, 26(3), 211-227.

Blurton, C. (2000). New directions of ICT-use in education. UNESCO World Communication and Information Report. Paris: UNESCO.

Calder, A. (2004). Online learning support: An action research project. James Cook University. Paper presented at the 4th Pacific Rim First Year Experience Conference at Queensland University of Technology, Brisbane, Australia.

Carlsen, W., & Single, P. B. (2000). Factors related to success in electronic mentoring of female college engineering students by mentors working in industry. Paper presented at the Annual Meeting of the National Association for Research in Science Teaching, New Orleans, LA.

Conway, C. (1998). Strategies for mentoring: A blueprint for successful organizational development. New York: John Wiley and Sons.

De Smet, M., Van Keer, H., & Valcke, M. (2008). Blending asynchronous discussion groups and peer tutoring in higher education: An exploratory study of online peer tutoring behaviour. Computers and Education, 50(1), 207-223.

Dewart, H., Drees, D., Hixenbaugh, P., & Williams, D. (2004, April 5-7). Electronic peer mentoring: A scheme to enhance support and guidance and the student learning experience. Paper presented at the Psychology Learning and Teaching Conference, University of Strathclyde, Glasgow, UK.

Eby, L. T. (1997). Alternative forms of mentoring in changing organizational environments: A conceptual extension of the mentoring literature. Journal of Vocational Behaviour, 51, 125-144.

Ensher, E. A., Heun, C., & Blanchard, A. (2003). Online mentoring and computer-mediated communication: New directions in research. Journal of Vocational Behaviour, 63, 264-288.

Gisbert, M. (2004). Las TIC como motor de innovación de la Universidad. In A. Sangrà & M. González (Eds.), La transformación de las universidades a través de las TIC: Discursos y prácticas (pp. 193-197). Barcelona: Ed. UOC.

Gray, M. M., & Gray, W. A. (1990). Planned mentoring: Aiding key transitions in career development. Career Planning and Adult Development Journal, 6(3), 27-32.

Hamilton, B. A., & Scandura, T. A. (2003). E-mentoring: Implications for organizational learning and development in a wired world. Organizational Dynamics, 31(4), 388-402.

Harrington, A. (1999). E-mentoring: The advantages and disadvantages of using e-mail to support distant mentoring. The Coaching and Mentoring Network Articles. Retrieved October 17, 2007, from http://www.coachingnetwork.org.uk/ResourceCentre/Articles/ViewArticlePF.asp?artId=63

Harris, J., O'Bryan, E., & Rotenberg, L. (1996). It's a simple idea, but it's not easy to do! Practical lessons in telementoring. Learning and Leading with Technology. Retrieved October 17, 2007, from http://emissary.wm.edu/templates/content/publications/October96LLT.pdf

Headlam-Wells, J. (2004). E-mentoring for aspiring women managers. Women in Management Review, 19(4), 212-218.

Headlam-Wells, J., Gosland, J., & Craig, J. (2005). There's magic in the Web: E-mentoring for women's career development. Career Development International, 10(6-7), 444-459.
Kasprisin, C. A., Single, P. B., Single, R. M., & Muller, C. B. (2003). Building a better bridge: Testing e-training to improve e-mentoring programmes in higher education. Mentoring and Tutoring, 11(1).

Kram, K. E. (1985). Mentoring at work: Developmental relationships in organizational life. New York: University Press of America.

McCormick, N., & Leonard, J. (1996). Gender and sexuality in the cyberspace frontier. Women & Therapy, 19, 109-119.

McLean, M. (2004). Does the curriculum matter in peer mentoring? From mentee to mentor in problem-based learning: A unique case study. Mentoring and Tutoring, 12(2), 173-186.

McMahon, M., Limerick, B., & Gillies, J. (2002). Structured mentoring: A career transition support service for girls. Australian Journal of Career Development, 11(2), 7-12.

MENTOR. (2001). US quality standards for e-mentoring: Elements of effective practice for e-mentoring. E-Mentoring Clearinghouse. Retrieved October, 2005, from www.mentoring.org/emc

Miller, A. (2004). E-mentoring: An overview. Paper presented at the First Aimhigher Networking Meeting, Aston University.

Moore, G. R. (1991). Computer to computer: Mentoring possibilities. Educational Leadership, 49(3), 40.

Noe, R. A. (1998). An investigation of the determinants of successful assigned mentoring relationships. Personnel Psychology, 41, 457-479.

O'Neill, K. (2004). Building social capital in a knowledge-building community: Telementoring as a catalyst. Interactive Learning Environments, 12(3), 179-208.

O'Neill, K., & Harris, J. (2000, April 24-28). Is everybody happy? Bridging the perspectives and developmental needs of participants in telementoring programs. Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, LA.

O'Neill, K., & Harris, J. (2005). Bridging the perspectives and developmental needs of all participants in curriculum-based telementoring programs. Journal of Research on Technology in Education, 37(2), 111-128.

O'Neill, K., Harris, J., Cravens, J., & Neils, D. (2002). Perspectives on e-mentoring: A virtual panel holds an online dialogue. National Mentoring Center Newsletter, 9, 5-12.

O'Regan. (2006). MOLIE: Mentoring online in Europe. Salford: University of Salford.

Perren, L. (2003). The role of e-mentoring in entrepreneurial education and support: A meta-review of academic literature. Education and Training, 45(8-9), 517-525.

Rhodes, J. E. (2002). A critical view of youth mentoring. Boston.

Ross, B. (2004). First Aimhigher e-mentoring networking meeting. Birmingham: Middlesex University.

Salmon, G. (2000). E-moderating: The key to teaching and learning online. London: Kogan Page.

Single, P. B., & Muller, C. B. (1999). Electronic mentoring: Issues to advance research and practice. Paper presented at the Annual Meeting of the International Mentoring Association, Atlanta, GA.

Single, P. B., & Muller, C. B. (2000, April 24-28). Electronic mentoring: Quantifying the programmatic effort. Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, LA.
Single, P. B., & Muller, C. B. (2001). When email and mentoring unite: The implementation of a nationwide electronic mentoring program-MentorNet, the national electronic industrial mentoring network for women in engineering and science. American Society for Training and Development.

Single, P. B., & Muller, C. B. (2005). Electronic mentoring programs: A model to guide practice and research. Mentoring and Tutoring, 13(2), 305-320.

Single, P. B., & Muller, C. B. (forthcoming). Electronic mentoring programs: A model to guide practice and research. Retrieved January 2008 from www.apesma.asu.au/mentorsonline/reference/pdfs/muller_and_boyle_single.pdf

Single, P. B., Muller, C. B., Cunningham, C. M., & Single, R. M. (2000). Electronic communities: A forum for supporting women professionals and students in scientific fields. Journal of Women and Minorities in Science and Engineering, 6, 115-129.

Single, P. B., & Single, R. M. (2005a). E-mentoring for social equity: Review of research to inform program development. Mentoring and Tutoring, 13(2), 301-320.

Single, P. B., & Single, R. M. (2005b). Mentoring and the technology revolution: How face-to-face mentoring sets the stage for e-mentoring. In F. K. Kochan & J. T. Pascarelli (Eds.), Creating successful telementoring programs (pp. 7-27). Greenwich: Information Age Press.

Smith, T., & Ingersoll, R. (2004). What are the effects of induction and mentoring on beginning teacher turnover? American Educational Research Journal, 41(3), 681-714.

Stokes, A. (2001). Using telementoring to deliver training to SMEs: A pilot study. Education + Training, 43(6), 317-324.

Stokes, P., Garrett-Harris, R., & Hunt, K. (2003). An evaluation of electronic mentoring (e-mentoring). Paper presented at the 10th European Mentoring & Coaching Conference.

Tesone, D. V., & Gibson, J. W. (2001, October). E-mentoring for professional growth. Paper presented at the IEEE International Professional Communication Conference, Santa Fe, NM.

Wadia-Fascetti, S., & Leventman, P. G. (2000). E-mentoring: A longitudinal approach to mentoring relationships for women pursuing technical careers. Journal of Engineering Education, 89(3), 295-300.
ADDITIONAL READING

Bierema, L. L., & Merriam, S. B. (2002). E-mentoring: Using computer mediated communication to enhance the mentoring process. Innovative Higher Education, 26(3), 211-227.

Boyer, N. R. (2003). Leaders mentoring leaders: Unveiling role identity in an international online environment. Mentoring and Tutoring, 11(1), 25-37.

Clutterbuck, D., & Cox, T. (2005, November). Mentoring by wire. Training Journal, 35-39.

Crocitto, M., Sullivan, S. E., & Carraher, S. M. (2005). Global mentoring as a means of career development and knowledge creation: A learning based framework and agenda for future research. Career Development International, 10(6/7).

Duff, C. (2000). Online mentoring. Educational Leadership, 58(2), 49-52.

Eby, L. T. (1997). Alternative forms of mentoring in changing organizational environments: A conceptual extension of the mentoring literature. Journal of Vocational Behaviour, 51, 125-144.

Echavarria, T., et al. (1995). Encouraging research through electronic mentoring: A case study. College & Research Libraries, 56(4), 352-361.
Ensher, E. A., Heun, C., & Blanchard, A. (2003). Online mentoring and computer-mediated communication: New directions in research. Journal of Vocational Behaviour, 63, 264-288.

Ensher, E. A., Thomas, C., & Murphy, S. E. (2001). Comparison of traditional, step-ahead, and peer mentoring on proteges' support, satisfaction, and perceptions of career success: A social exchange perspective. Journal of Business and Psychology, 15(3), 419-438.

Haas, A., Tulley, C., & Blair, K. (2002). Mentors versus masters: Women's and girls' narratives of (re)negotiation in Web-based writing spaces. Computers and Composition, 19(3), 231-249.

Hamilton, B. A., & Scandura, T. A. (2003). E-mentoring: Implications for organizational learning and development in a wired world. Organizational Dynamics, 31(4), 388-402.

Hawkridge, D. (2003). The human in the machine: Reflections on mentoring at the British Open University. Mentoring and Tutoring, 11(1), 15-24.
Kasprisin, C. A., Single, P. B., Single, R. M., & Muller, C. B. (2003). Building a better bridge: Testing e-training to improve e-mentoring programmes in higher education. Mentoring and Tutoring, 11(1).

Lavin Colky, D., & Young, W. (2006). Mentoring in the virtual organization: Keys to building successful schools and businesses. Mentoring and Tutoring, 14(4), 433-447.

Lee, H., & Noh, S. (2003). Educational use of e-mentoring to encourage women into science and engineering. Lecture Notes in Computer Science, 2713, 75-84.

Mahayosnand, P. (2000). Public health e-mentoring: An investment for the next millennium. American Journal of Public Health, 90(8), 1317.

O'Neill, K. (2004). Building social capital in a knowledge-building community: Telementoring as a catalyst. Interactive Learning Environments, 12(3), 179-208.
Headlam-Wells, J., Craig, J., & Gosland, J. (2006). Encounters in social cyberspace: E-mentoring for professional women. Women in Management Review, 21(6), 483-499.
O’Neill, K., & Harris, J. (2005). Bridging the perspectives and developmental needs of all participants in curriculum-based telementoring programs. Journal of Research on Technology in Education, 37(2), 111-128.
Headlam-Wells, J., Gosland, J., & Craig, J. (2005). There’s magic in the Web: E-mentoring for women’s career development. Career Development International, 10(6-7), 444-459.
O’Neill, K., Harris, J., Cravens, J., & Neils, D. (2002). Perspectives on e-mentoring: A virtual panel holds an online dialogue. National Mentoring Center Newsletter, 9, 5-12.
Henderson, K. L. (1996). Electronic “keyboard pals”: Mentoring the electronic way. Serials Librarian, 29(3-4), 141-164.
O’Neill, K., Weiler, M., & Sha, L. (2005). Software support for online mentoring programs: A research-inspired design. Mentoring and Tutoring, 13(1), 109-131.
Hixenbaugh, P., Dewart, H., Thorn, L., & Drees, D. (2005). Peer e-mentoring: Enhancement of the first year experience. Psychology Learning and Teaching, 5(1), 8-14. Hunt, K. (2005). E-mentoring: Solving the issue of mentoring across distances. Development and Learning in Organizations, 19(5), 7-10.
Paul, R. (2003). Electronic mentoring. School Administrator, 60(10), 26-29. Perren, L. (2003). The role of e-mentoring in entrepreneurial education and support: A metareview of academic literature. Education and Training, 45(8-9), 517-525.
Rhodes, J. E. (2002). New directions for youth development: Theory, practice and research: A critical view of youth mentoring. San Francisco: Jossey-Bass. Richard, K. (2004). E-mentoring and pedagogy: A useful nexus for evaluating online mentoring programs for small business? Mentoring and Tutoring, 12(3), 383-401. Rogan, J. M. (1997). Online mentoring: Reflections and suggestions. Journal of Computing in Teaching, 13(3), 5-13. Russell, A., & Perris, K. (2003). Telementoring in community nursing: A shift from dyadic to communal models of learning and professional development. Mentoring and Tutoring, 11(2), 227-239. Sinclair, C. (2003). Mentoring online about mentoring: Possibilities and practice. Mentoring and Tutoring, 11(1), 79-95. Single, P. B., & Single, R. M. (2005). E-mentoring for social equity: Review of research to inform program development. Mentoring and Tutoring, 13(2), 301-320.
Stephenson, S. (1998). Distance mentoring. Journal of Educational Technology, 26(2), 181-186. Villar, L., & Alegre, O. (2006). Online faculty development in the Canary Islands: A study of e-mentoring. Higher Education in Europe, 31(1), 65-81. Vincent, A. (1999). Using telementoring to overcome mentor shortages: A process model. International Journal of Management, 16(3), 413-421. Wadia-Fascetti, S., & Leventman, P. G. (2000). E-mentoring: A longitudinal approach to mentoring relationships for women pursuing technical careers. Journal of Engineering Education, 89(3), 295-300. Weber, R. (2001). Click on to e-mentoring. People Dynamics, 19(8), 28-39. Woodd, M. (1999). The challenge of telementoring. Journal of European Industrial Training, 23(3), 140-144.
Chapter V
Training Teachers for E-Learning, Beyond ICT Skills Towards Lifelong Learning Requirements: A Case Study
Olga Díez CEAD Santa Cruz de Tenerife, Spain
ABSTRACT

This chapter describes an experience in teacher training for e-learning in the field of adult education. It takes into account the models offered by flexible lifelong learning as the proper way to develop training for teachers in service, considering the advantages of blended learning for the target audience. The chapter discusses the balance between mere ICT skills and pedagogical competences. In this context, the learning design should always allow the teachers in training to integrate into their work ICT solutions that fit the didactic objectives, renew teaching and learning methodology, facilitate communication, give room to creativity, and allow pupils to learn at their own pace. By doing so, they will come closer to the profile of an online tutor, as a practitioner who successfully takes advantage of virtual environments for collaborative work and learning communication.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Introduction

The aim of this chapter is to identify the features a teacher training course has to fulfil in order to help teachers use ICT as a tool to reach learning goals. Skills and competences are to be developed to guarantee not only that teachers are able to make proper use of computers, but also, and most importantly, that they are aware of the major challenges ICT brings as a powerful means of communication and as an emerging new pedagogical model. A case study is offered to point out possible approaches to developing training programmes.
Background

Teacher training is a constant challenge in the ever-evolving learning that the knowledge society requires. It is nowadays commonplace to point out the advantages of integrating ICT into schools as a proper way to transform information into knowledge (Barberà & Badia, 2004). As a survey developed for the European Union by European Schoolnet (2005) shows, a great effort has been made in recent years to ensure the presence of ICT in every school through the necessary infrastructure. As a result, more than 80% of European teachers describe themselves as competent in using computers and the Internet in classroom situations; two-thirds report having the necessary motivation to do so, and 60% describe the ICT infrastructure in their schools and the Internet connection as sufficiently fast. This means that most teachers use computers in their everyday work; on the other hand, some are still reluctant to do so, mainly those who claim that their subject does not lend itself to being taught with computers, or that proper didactic contents are lacking.
The teaching staff's point of view may lead us to conclude that most teachers are aware of the advantages of using ICT in education. It could not be otherwise. Computers are part of our daily life, and ICT skills are thus among the new basic skills, according to the Recommendation of the European Parliament and the Council of 18 December 2006 on Key Competences for Lifelong Learning (Recommendation, 2006). But if we look more closely, we can easily notice that the current use of ICT in classrooms is mainly related to information and data transfer and interactive exercises. This is closer to computer-aided instruction (CAI) than to a true e-learning system. In other words, the possibilities of the Internet as a tool for communication, collaborative learning, and the development of social spaces for sharing and building knowledge still remain almost unexploited. For instance, the flexibility that e-learning offers to support and guide the learning activities of pupils who need to improve their learning results is an almost unexplored field. In the process of developing e-learning solutions for schools, teaching staff, policy makers, and other stakeholders need to shift to a broader understanding of the possibilities of e-learning within formal learning. From a content-centred approach, which may help teachers deliver predefined instructional contents, it is possible to move towards a more flexible e-learning model, one that also correlates with lifelong learning objectives. This path must be taken to ensure positive experiences for teachers and interesting learning results, and, accordingly, a natural shift to a more open-minded use of the Web as a powerful way to build and share knowledge, which will probably bring us to the almost mythical realm of e-learning 2.0, often foretold as the future scenario that will allow learning in every possible human situation.
Flexible Lifelong Learning as a Model for Teacher Training

Having established the need for a broader training model for teachers, even in formal educational contexts, we must consider which kind of training programme should be developed. As a matter of fact, many training courses are regularly offered to teaching staff by the educational departments of every European country. The local peculiarities of this offer make it difficult to establish a common standard within the European Union and to design a proper common policy in teacher training. A common background is given by the Common European Principles for Teacher Competences and Qualifications (European Commission, 2005), where the European Union views the role of teachers and their lifelong learning and career development as key priorities. Teachers should be equipped to respond to the evolving challenges of the knowledge society, participate actively in it, and prepare learners to be autonomous lifelong learners. The key competences of teachers are set out as follows. Teachers should be able to:

•	Work with others
•	Work with knowledge, technology, and information
•	Work with and in society
To facilitate such an approach, ICTs are not only a means to distribute training courses for teachers in service, but also the logical environment where these three dimensions of the key competences are to be developed and fulfilled. Proper use of ICT strengthens the abilities needed for collaborative work, requires the autonomous use, selection, and delivery of information sources, and allows teachers to keep in touch with a constantly changing society in which their pupils are to become active citizens.
The report entitled "Assessment Schemes for Teachers' ICT Competence: A Policy Analysis," developed for the European Union by European Schoolnet (2005), includes some remarkable key findings:

•	In the future, more detailed job descriptions and specialized training profiles are needed for different actors in schools to cater for personalised training.
•	Training policies face the challenge of being flexible enough for short-term adjustments to changing training needs while incorporating long-term goals and objectives that are important for teachers to identify with.
•	Countries will need to think of offering new and flexible forms of training for teachers, at different times, at different places, with different means, but much more related to the concept of lifelong learning. This includes a shift in the culture of the teaching profession from a passive consumer of training courses to an active producer and organiser of its own learning process.
•	Training policies can only be successfully implemented and sustainable in the long term if they are part of an interlinked or integrated ICT strategy that caters for technology, pedagogy, support, organisational development, and (financial) solutions.
From this point of view, e-learning solutions are an interesting approach that allows flexible forms of training, but they have to be delivered under certain conditions to ensure the quality of their results. E-learning is unfortunately a very broad term, which may lead readers to think of many different learning scenarios; it therefore seems necessary to define, or at least to delimit, the concept for the aims of this chapter. Computers and learning are the two basic ideas that come to mind when trying to define e-learning. Therefore, a first definition could point
out this relationship. For instance: "The delivery of a learning, training or education program by electronic means. E-learning involves the use of a computer or electronic device (e.g., a mobile phone) in some way to provide training, educational or learning material" (Stockley, 2003). Such a definition covers the delivery of instruction via CD, the Internet, or shared files on a network. It is also called computer-mediated learning. It is not surprising that a new definition of e-learning is being developed, now that a broader use of the Web has been achieved. The so-called Web 2.0 (O'Reilly, 2005) enables a new definition of the concept, under the label of e-learning 2.0 (Downes, 2005; Jennings, 2005; Karrer, 2006). The interesting point of this concept is that e-learning can no longer be defined by the use of ICT itself, but by a certain use of ICT. It includes communication, collaborative learning, social networks, and new roles for learners and teachers. But this supposed novelty can be traced back to the theories that stressed the change from a transmission model of knowledge transfer to a learner-centred or activity-centred model (Gifford & Enyedy, 1999; Reeves, 1999; Vinicini, 2001; Wilson, 1995). The conventional classroom was the natural metaphor in which many learning management systems (LMS) and, even more importantly, most learning designers and content creators developed the learning environment, from computer-aided instruction (CAI) to many online courses. They order the sequence of information and focus on the structure of the disciplinary domain. But insofar as it is possible to encourage communication, interaction, and collaboration in e-learning environments, this model must be supplemented with new methods that allow achieving orchestrated interdependence and autonomy in e-learning. The new idea is well summarized by the image of a community, a virtual learning community
(Cabero, 2006; García Aretio, 2003; Hudson, 2005; Paloff & Pratt, 1999). In the most evolved development of these, we find the virtual correlate of the community of practice, that is, "a shared domain of interest" where "members interact and learn together" and "develop a shared repertoire of resources." In other words, it is the place where learning happens (Wenger, 1998). In this pedagogical approach, the new role of the teacher is a turning point for the development of e-learning (Kearsley, 2000). In recent years many e-learning courses have been developed at high schools, universities, and enterprises, and many lessons are to be learned from them. In the most successful experiences, the key factor is the presence of a specialized trainer who ensures the effectiveness of the Web-based learning process. This trainer is skilled and competent in interaction, communication, and knowledge building through virtual spaces. In other words, this professional is the tutor online, defined as follows by Seoane and García (in press): Tutor online is the teaching staff that follows a group of students on a part of their learning path, ensures the efficiency of the teaching-to-learning process, promotes the achievement of aims and skills predicted for the academic initiative that he leads, by creating a context of collaborative and active learning, and evaluates how pre-established aims were achieved for students and for the academic intervention (quality management). Of course, teaching staff in schools are still far from the level of familiarity and competence to be found in a proper tutor online. Nevertheless, given the varied reality of schools and the different target learners they serve, in certain kinds of educational institutions teachers' functions are getting nearer to this profile, as they face an increasing variety of pupils in their classrooms. This is the situation of centres devoted to adult education, vocational training centres, and secondary schools providing courses to adults and young adults who
need to improve their educational outcomes and validate the skills developed in their jobs. In this field much remains to be done to prepare ordinary teachers to become adult educators with the skills and competences that allow them to offer their pupils attractive, flexible, and accurate learning. In many cases this means that they, too, have to learn the new skills needed in the knowledge society.
Developing Teacher Training Programmes: Beyond ICT Skills

Teacher training as an efficient way to develop the skills needed for e-learning is not simple. As a matter of fact, it is a long-term aim which should be reached step by step through smaller formative actions. The role of formal learning as a first step towards lifelong learning is reinforced by the Recommendation of the European Parliament and the Council on key competences. Its first aim is to ensure that "initial education and training offers all young people the means to develop the key competences to a level that equips them for adult life, and which forms a basis for further learning and working life." It is important to note that e-learning involves the capability to acquire knowledge and develop skills through Web-based means. E-learning, when properly led, facilitates the metacognitive awareness needed in the field of "learning to learn." Therefore, ICT in this context is just an enabler of a new means to learn that should also encompass several key elements, such as learning design, collaborative learning, and social contexts. In spite of the fact that younger generations have grown up with ICT and are thus "digital natives" (Prensky, 2001), they are far from being digitally literate. Preliminary research released by Educational Testing Service (ETS) on November 14, 2006, shows that many students lack the critical
thinking skills to perform the kind of information management and research tasks necessary for academic success. On the other hand, most teachers are "digital immigrants." This situation in the average classroom reflects the digital divide that currently exists in Europe. Furthermore, teachers quite often feel less competent than their pupils in this field, and this is the reason why they do not risk integrating ICT to a greater extent (Barnes, Marateo, & Ferris, 2007).
How Is It Possible to Train Teachers in This Evolving Educational Context?

When designing a teacher training course, a balance between technical and didactic contents must be reached. In many cases, new ICT tools are introduced to teachers without pointing out clearly which didactic benefits they provide, or how far they could ease their daily work. Moreover, a great deal of funding is spent on courses whose results are rarely incorporated into daily work in the classroom. A few questions should be asked when designing teacher training courses:

•	What kind of skills does the course intend to facilitate?
•	Are these new skills profitable for teachers at the end of the course, or can they take advantage of them even while attending the course?
•	If the didactic advantages are clear, is the related ICT presented as a means, or does the course focus mainly on it?
•	How far does the course allow teachers to develop their creativity in order to incorporate the new skills into their own learning context?
With these questions in mind, we will present the experience of a teacher training course that took place in 2005-2006. The study of this case will provide some basis for profitable conclusions.
A Case Study: Training Teachers for Formal Adult Education within an Open Learning Methodology

The Educational Department of the Regional Government of the Canary Islands, Spain, offered a training course to teaching staff working in adult education. It ran for five months (from February to June 2006) and was certified for 100 training hours. It was carried out as a blended learning course; that is, there was one compulsory face-to-face meeting per month. It took place on three different islands (Lanzarote, Tenerife, and Gran Canaria), and 246 teachers from the seven Canary Islands registered. The participants worked at primary schools, vocational training centres, secondary schools, and Escuelas Oficiales de Idiomas (schools devoted to foreign languages). The main goal of the "Curso de educación de personas adultas en modalidad no presencial" was to introduce the features of adult education, in order to develop the skills required for open education, using ICT as a helpful means. The general learning objectives of the training course were stated as follows:

a.	To introduce teachers to the theories and practices related to adult education.
b.	To deliver basic knowledge about the specifics of this field of education.
c.	To recognize and analyze the features of distance learning, the related methodology, and especially the tutorial and advisory role of the teachers.
Accordingly, the course structure aimed to combine individual and group learning activities, supported over the Internet through a learning management system (LMS) and complemented with face-to-face sessions once a month. Beforehand, all participants had to attend a workshop in order to acquire basic skills in the use of an LMS, both as a student and as a tutor. In this case, it was Moodle 1.5.4, a well-known course
management system designed to help educators create online courses with opportunities for rich interaction, integrating resources and activities as well as assessment tools. The workshop was held entirely face-to-face, in groups of 20 participants, to allow hands-on experience with a computer under the guidance of an instructor, for a total of 25 training hours. The contents of the course comprised five thematic units:

•	Adult education features
•	Distance learning
•	Tutoring in adult education
•	Designing learning contents for adult education
•	ICT-supported distance learning
Every unit was introduced by a face-to-face session in which practical examples of the previous activities and units were given, the main topics of the new unit were outlined, and directions for the subsequent activities were offered. The face-to-face sessions were scheduled as large class meetings (about 80 people) where the tutors acted mainly as traditional teachers, developing topics and giving general advice for following the unit. During the month between face-to-face sessions, the teachers who had lectured in the ordinary one-to-many way changed their function and supported open many-to-many discussion, as online tutors in the main virtual course. Therefore, during the five months of the course, every participant counted on the support and guidance of the tutor team, which not only designed and delivered the learning contents and activities of each unit, but also provided chat meetings, forum discussions, personal e-mail advice, and technical support. At the end of the course, participants could choose between designing a learning unit or creating learning content for a specific subject in the context of adult education.
A Blended Approach

The course was developed in blended form, as a proper way to initiate teachers into e-learning. Blended learning is indeed another elusive concept (Oliver & Trigwell, 2005) that some authors relate to the frustration with e-learning in general terms (Bernabé, 2004). But given the goals and features of the course contents and the target audience, it was the chosen model (Valiathan, 2002). The benefits of this decision were the following:

•	Organization of the course: As the number of participants was about 250 teachers with only four instructors, a completely online course would have been very difficult to run without a rather high attrition rate. The long duration of the course was another risk factor to be taken into account (Diaz & Cartnal, 2006).
•	Pace to develop ICT skills: The blended approach shifted gradually from a fully face-to-face beginning in the workshops to an almost completely online final assessment (Driscoll, 2002). In the meantime, the monthly sessions allowed the instructors to reinforce participants' motivation, present the best results of the proposed tasks, and increase informal meetings of trainers and trainees at the coffee breaks. Moreover, it allowed learners to move gradually from the traditional classroom role to active participation in the virtual classroom, using forums and chats as a public way to share experience and build knowledge. Thus, the implementation of the course fostered the development of higher ICT skills as essential to the learning process.
•	Course contents: The same course had been delivered in previous years through a classic distance learning schedule that involved a lot of individual work with a handbook and the fulfilment of individual tasks to be delivered at the monthly meeting; the use of a computer was foreseen as a way to deliver written contents to the participants and to allow them to ask questions in between. The blended form allowed the reuse of written contents and took a step forward, as the virtual classroom was the central point of the course and the face-to-face sessions were intended to reinforce the online learning.
•	Learner-centred methodology: Due to the very broad variety of interests, working contexts, and previous experience with ICT and adult learning among participants, the blended approach made it easier to present the common points and bring together the different learning situations in the face-to-face classroom, and to work in more detail on the difficulties and interests of participants, almost on demand, in the virtual classroom. It was possible to minimize the tendency of participants to disperse, which grows as a long-term online course develops, and it helped trainees keep in mind the main goals of the course.
The course was led and coordinated by a team of four tutors who were themselves teachers at the same educational levels as the participants and had broad experience in open adult education, creation of learning content, online tuition, and teacher training. Apart from the "common main course," in the virtual environment every participant had a "practice course" in which to test and develop the contents and activities of the course. They therefore played a double role in the virtual environment: as students in the "common main course," and as teachers in their own "practice course." On demand, 288 practice courses were implemented, as participants could choose whether to develop their tasks alone or in small collaborative teams.
It was in the virtual common course that the social dimension of the proposed learning path took place. Besides unit contents and tests, special attention was given to fostering and promoting the use of communication tools such as forums, chats, and internal messages. E-mail was another way to ask tutors for help or advice, but its use was limited to moments when serious technical problems arose within the virtual environment. The forums were the main channel of communication throughout the whole course. A glance at the logs they received makes this clear: there were 42,816 logs across all the available forums (i.e., 147 logs per participant), the general forum being the most visited. It was the place not only for general matters about the course, but mainly for sharing experiences, asking open questions, and recommending further information or Web sites, always within the scope of the aims of the course. Most of these discussions were started by the participants and sometimes produced long threads of conversation, often moderated by the tutors. Chat was used only on the recommendation of the tutors as part of the contents of the course, and did not play an important role in other situations. Internal messages were used mainly to keep in touch with other participants, while the main way to ask the tutors for advice was the forums. The tutors always answered in less than 12 hours, with an average response time of 2 hours. Another particular feature of the course was the three online workshops devoted to technical issues that participants might need when creating their own contents and courses. The goal of these workshops, as stated in the course syllabus, was to improve digital literacy and ICT skills: accurate information searching on the Web, authoring tools and standard content formats such as PDF, and audio file creation.
Though attendance at these workshops was not compulsory, almost every participant took part in at least one of them. As they
began to develop self-made contents, they became more aware that surfing the Internet is not so easy, that making digital contents properly accessible through the Web requires some special attention, and that multimedia learning contents were something they could experiment with. By the end of the course, only 26 participants had never entered the course, and of the rest only 5 were late in delivering the activities required for assessment. The initial dropout rate was 10%, and the number of participants remained stable throughout the rest of the course. At the end of the course, an evaluation questionnaire was answered by the participants. It considered course development and organization; the tutors' work, communicative skills, and adequacy; and the usefulness and interest of course contents. Unfortunately, the results are not yet available from the department that ran the course. Nevertheless, according to the posts sent by participants after the end of the course, it was most successful. They reported having learned a lot and being interested in attending further courses of this kind because of their flexibility and quality. When asked after the final on-ground meeting, the tutors also expressed their satisfaction with the development of the course, the attendance of participants, and the learning results.
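The dropout figure reported above follows from simple arithmetic; a minimal sketch (the function name is our own, and the 246 registered participants and 26 non-starters are the figures reported in this section):

```python
def dropout_rate(registered: int, never_entered: int) -> float:
    """Percentage of registered participants who never entered the course."""
    return never_entered / registered * 100

# Figures reported for the 2006 course: 246 registered, 26 never entered.
print(f"{dropout_rate(246, 26):.1f}%")  # close to the 10% initial dropout quoted above
```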
Learning Design and Learning Outcomes

Some important issues from the reported course can be summarized as follows. In spite of the fact that the aim of the course was to introduce teachers to adult education and lifelong learning and to enable them to create specific learning contents for adults, the final results also included other outcomes.

•	About one third of the teachers had never before used online communication tools such as forums, chat, or messages. Many of them considered these to be part of the younger generation's habits. Through steady use, they became aware of the usefulness of these tools for learning, as means not only to foster motivation and social skills, but also to generate more accurate learning. The use of these tools also meant that most teachers could express their own expectations more clearly during the course, thus improving their metacognitive skills.
•	The forum was a great help in reaching better learning outcomes, but it is also remarkable that some participants stated that they were rather "lurkers" in the forums, as they felt uneasy when sending posts. This was not an obstacle to reaching the course objectives. In other words, their learning styles were not suited to active public written participation, but they could still benefit from the group interaction, merely as lurkers.
•	In spite of the fact that collaborative learning was neither a goal of this particular course nor its chosen methodology, the communication flow was so rich that it introduced a kind of collaborative synergy that was present in the final activities.
•	The course benefited from a flexible design that allowed the tutors to adapt it to the demands of participants. It seems that in this case, the proper use of an LMS, like Moodle, as the main space for communication made it possible for the on-ground meetings to be considered more as an introduction to the tasks to be fulfilled during the following month than as the core of the course. From the point of view of the tutors, the core of this course was the interaction and the work carried out in the virtual environment, while the face-to-face sessions were a companion to this rather than the contrary. As usually happens, participants wanted to learn real things: ideas, tips, and resources that could easily improve their work with adult pupils. And by doing so, they were involved step by step in a new ICT environment and tested new technological tools because they could foresee their benefits.
•	Of course, there were participants who did not learn as much ICT during the course. But for most of them, this was the first time they had to harmonize their daily routines, their work at school, and their virtual and almost daily presence in the course over several months. They wanted to take the best advantage of their effort, thus experiencing for themselves some of the conditions their adult learners have to face in order to obtain valuable learning results.
•	Furthermore, the use of peer-to-peer communication made it possible, in the most remarkable cases, to investigate the use of an LMS as a virtual environment relevant for learning activities as well as for collaborative work and for the dissemination of teaching experiences and strategies. Under these circumstances, the first steps towards a virtual community of teachers could have been taken, if the leadership conditions required to sustain it had been in place.
Conclusion: From a List of Skills to a Set of Competencies

As stated by the European Parliament and the Council, the aims of education are "personal fulfilment, active citizenship, social cohesion and employability in a knowledge society." In such a social context, a broad educational policy is needed. Teaching, even in formal contexts, no longer deals with the transmission of a set of predefined learning contents, but shifts towards the development of capacities that enable citizens to adapt dynamically to a rapidly changing world. From this starting point, it is obvious that teaching staff need to be enabled to accomplish the
required functions in an ever-evolving society. This implies that a large-scale policy for teacher training must be developed, in order not merely to obtain a certain list of new skills, mostly those related to the use of ICT in learning situations. It actually means that teachers require training to apply their skills to new problems, under new conditions. They should thus develop skills into competences and, by doing so, integrate into their work ICT solutions that fit the didactic objectives, renew teaching and learning methodology, facilitate communication, give room to creativity, and allow pupils to learn at their own pace. By doing so, they will come closer to the profile of a tutor online, as a practitioner who successfully takes advantage of virtual environments for collaborative work and learning communication.
Future Research Directions

The study of a single case only allows an outline of the relationship between learning design and learning outcomes. The evaluation of other in-service teacher training courses should take into consideration the issues presented in this chapter, in order to establish some conclusions about the following topics:

•	The supposed instructional benefits of a blended learning course compared with fully online training courses. This involves a study of the online learning design that should underlie the implementation of this kind of course, in order to determine under which conditions a blended or an online design is desirable.
•	The integration of ICT in the daily work of teachers. Where do the barriers that hinder the use of ICT in teaching lie? How is it possible to promote ICT integration through teacher training?
•	Collaborative work and learning. The so-called Web 2.0 makes it possible to broaden learning activities beyond classroom walls and to allow people to work together within a collaborative framework.
•	The development of new roles for teachers and learners, and the way different instructional designs sustain them when adapted to the working context. This involves setting the rules of flexible learning in order to avoid an excess of flexibility that could lead courses into chaotic, random learning.

Of particular interest is the study of learning outcomes in teacher training courses developed online. The analysis of their implementation and research into the issues that guarantee the quality of their development would offer valuable guidance.

References
Barberà, E., & Badia, A. (2004). Educar con aulas virtuales. Orientaciones para la innovación en el proceso de enseñanza y aprendizaje. Barcelona: A. Machado Libros.

Barnes, K., Marateo, R., & Ferris, S. (2007). Teaching and learning with the net generation. Innovate, 3(4). Retrieved October 18, 2007, from http://www.innovateonline.info/index.php?view=article&id=382

Bernabé, A. (2004). Blended learning. Conceptos básicos. Pixel-Bit. Revista de Medios y Educación, 23, 7-20. Retrieved October 18, 2007, from http://www.lmi.ub.es/personal/bartolome/articuloshtml/04_blended_learning/documentacion/1_bartolome.pdf

Cabero, J. (2004). Bases pedagógicas del eLearning. Revista de Universidad y Sociedad del Conocimiento, 3. Retrieved October 18, 2007, from http://www.uoc.edu/rusc
Training Teachers for E-Learning
Diaz, D., & Cartnal, R. (2006). Term length as an indicator of attrition in online learning. Innovate, 2(5). Retrieved October 18, 2007, from http://www.innovateonline.info/index.php?view=article&id=196

Downes, S. (2005, October 17). E-learning 2.0. eLearn Magazine. Retrieved October 18, 2007, from http://elearnmag.org/subpage.cfm?section=articles&article=29-1

Driscoll, M. (2002). Blended learning: Let's get beyond the hype. Learning and Training Innovations Newsline. Retrieved October 18, 2007, from http://www.ltimagazine.com/ltimagazine/article/articleDetail.jsp?id=11755

European Commission. (2005). Common European principles for teacher competences and qualifications. Retrieved October 18, 2007, from http://europa.eu.int/comm/education/policies/2010/doc/principles_en.pdf

European Schoolnet. (2005, July 15). Insight special report on assessment schemes for teachers' ICT competence—A policy analysis. Retrieved October 18, 2007, from http://www.eLearningeuropa.info/index.php?page=doc&doc_id=6578&doclng=6

García Aretio, L. (2003). Comunidades de aprendizaje en entornos virtuales. La comunidad iberoamericana de la CUED. In M. Barajas (Ed.), La tecnología educativa en la enseñanza superior. Madrid, Spain: McGraw-Hill.

Gifford, B.R., & Enyedy, N. (1999). Activity centered design: Towards a theoretical framework for CSCL. In Proceedings of the Third International Conference on Computer Support for Collaborative Learning. Retrieved October 18, 2007, from http://www.gseis.ucla.edu/faculty/enyedy/pubs/Gifford&Enyedy_CSCL2000.pdf

Hudson, B. (2005). Conditions for achieving communication, interaction and collaboration in e-learning environments. E-Learningeuropa.info. Retrieved October 18, 2007, from http://www.eLearningeuropa.info/index.php?page=doc&doc_id=6494&doclng=7&menuzone=1

Jennings, D. (2005). E-learning 2.0, whatever that is. Retrieved October 18, 2007, from http://alchemi.co.uk/archives/ele/e-Learning_20_wh.html

Karrer, T. (2006, February 10). What is e-learning 2.0. E-Learning Technology. Retrieved October 18, 2007, from http://e-Learningtech.blogspot.com/2006/02/what-is-e-Learning-20.html

Kearsley, G. (2000). Online education: Learning and teaching in cyberspace. Belmont, CA: Wadsworth.

Oliver, M., & Trigwell, K. (2005). Can blended learning be redeemed? E-Learning, 2(1). Retrieved October 18, 2007, from http://www.wwwords.co.uk/pdf/viewpdf.asp?j=elea&vol=2&issue=1&year=2005&article=3_Oliver_ELEA_2_1_web&id=83.104.158.140

O'Reilly, T. (2005). Web 2.0: Compact definition? Retrieved October 18, 2007, from http://radar.oreilly.com/archives/2005/10/web_20_compact_definition.html

Paloff, R.M., & Pratt, K. (1999). Building communities in cyberspace: Effective strategies for the online classroom. San Francisco: Jossey-Bass.

Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5). NCB University Press.

Recommendation of the European Parliament and the Council of 18 December 2006 on key competences for lifelong learning. (2006, December 12). Official Journal of the European Union. Retrieved October 18, 2007, from http://www.cimo.fi/dman/Document.phx/~public/Sokrates/Comenius/keycompetences06.pdf

Reeves, W. (1999). Learner-centered design: A cognitive view of managing complexity in product, information, and environmental design. Thousand Oaks, CA: Sage.
Seoane Pardo, A.M., & García Peñalvo, F.J. (in press). Tutoring & mentoring online. Definition, roles, skills and case studies. In G.D. Putnik & M.M. Cunha (Eds.), Encyclopedia of networked and virtual organizations. Hershey, PA: Idea Group Inc.

Stockley, D. (2003). E-learning definition. Retrieved October 18, 2007, from http://derekstockley.com.au/elearning-definition.html

Valiathan, P. (2002). Blended learning models. Learning Circuits. Retrieved October 18, 2007, from http://www.learningcircuits.org/2002/aug2002/valiathan.html

Vinicini, P. (2001). The use of participatory design methods in a learner-centered design process. ITFORUM 54. Retrieved October 18, 2007, from http://it.coe.uga.edu/itforum/paper54/paper54.html

Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge, UK: Cambridge University Press.

Wilson, B.G. (1995). Situated instructional design: Blurring the distinctions between theory and practice, design and implementation, curriculum and instruction. In M. Simonson (Ed.), Proceedings of selected research and development presentations. Washington, DC: Association for Educational Communications and Technology. Retrieved October 18, 2007, from http://carbon.cudenver.edu/~bwilson/sitid.html

Additional Reading

Berge, Z., Collins, M., & Dougherty, K. (2000). Design guidelines for Web-based courses. In B. Abbey (Ed.), Instructional and cognitive impacts of Web-based education (pp. 32-40). Hershey, PA: Idea Group Publishing.

Bonk, C.J., & Graham, C.R. (2005). Handbook of blended learning: Global perspectives, local designs. San Francisco: Pfeiffer Publishing. Retrieved October 18, 2007, from http://www.uab.edu/it/instructional/technology/docs/blended_learning_systems.pdf

Cabero, J. (2004). Bases pedagógicas del eLearning. Revista de Universidad y Sociedad del Conocimiento, 3. Retrieved October 18, 2007, from http://www.uoc.edu/rusc

Dillenbourg, P. (1999). Collaborative learning: Cognitive and computational approaches. New York: Pergamon.

Educational Testing Service. (2006). ICT literacy assessment preliminary findings. Retrieved October 18, 2007, from http://www.ets.org/Media/Products/ICT_Literacy/pdf/2006_Preliminary_Findings.pdf

European Commission. (2006, September 29). Benchmarking access and use of ICT in European schools 2006. Retrieved October 18, 2007, from http://ec.europa.eu/information_society/eeurope/i2010/docs/studies/final_report_3.pdf

Inaba, A., Ikeda, M., & Mizoguchi, R. (2003). What learning patterns are effective for a learner's growth? In U. Hoppe, F. Verdejo, & J. Kay (Eds.), Artificial intelligence in education: Shaping the future of learning through intelligent technologies (AIED2003) (pp. 219-226). Sydney, Australia.

Jonassen, D.H., & Rohrer-Murphy, L. (1999). Activity theory as a framework for designing constructivist learning environments. Educational Technology: Research and Development, 46(1).

Keegan, D. (1988). Theories of distance education: Introduction. In D. Sewart, D. Keegan, & B. Holmberg (Eds.), Distance education: International perspectives (pp. 63-67). New York: Routledge.

Oblinger, D.G., & Oblinger, J.L. (Eds.). (2005). Educating the net generation. Washington, DC: EDUCAUSE. Retrieved October 18, 2007, from http://www.educause.edu/books/educatingthenetgen/5989

Parchoma, G. (2003). Learner-centered instructional design and development: Two examples of success. Journal of Distance Education, 18(2).

Parchoma, G. (2005). Roles and relationships in virtual environments: A model for adult distance educators extrapolated from leadership experiences in virtual organizations. International Journal on E-Learning, 4(4).

Siemens, G. (2005). Connectivism: A learning theory for the digital age. International Journal of Instructional Technology and Distance Learning, 2(1).

Tan, S.C., Hu, C., Wong, S.K., & Wettasinghe, C.M. (2003). Teacher training on technology-enhanced instruction—A holistic approach. Educational Technology & Society, 6(1), 96-104.

Vygotsky, L. (1978). Mind in society. Cambridge, MA: Harvard University Press.
Chapter VI
The Role of Institutional Factors in the Formation of E-Learning Practices Ruth Halperin London School of Economics, UK
Abstract

This chapter explores institutional and socio-organisational factors that influence the adoption and use of learning management systems (LMS) in the context of higher education. It relies on a longitudinal case study to demonstrate the ways in which a set of institutional and organisational factors were drawn into the formation and shaping of e-learning practices. Factors found to figure predominantly include institutional conventions and standards, pre-existing activities and routines, existing resources available to the institution, and, finally, the institution's organisational culture. The analysis further shows that socio-organisational factors may influence e-learning implementation in various ways, as they both facilitate and hinder the adoption of technology and its consequent use. It is argued that institutional parameters have particular relevance in the context of hybrid modes of e-learning implementation, as they illuminate the tensions involved in integrating technological innovation into an established system.
Part I: Background

Introduction

This chapter focuses on the institutional and socio-organisational factors that influence the use of learning management systems (LMS) in the context of higher education. Drawing on a
longitudinal case study in an academic setting, the chapter reveals the vital role of institutional concerns for understanding learning technology use and its consequences.

By exploring institutional and organisational factors in e-learning, this study addresses a definite gap in the literature to date. As shown in the literature review, various factors that may facilitate or hinder the effective use and integration of learning technology have been explored. These include technical factors such as availability, stability, and reliability; factors associated with instructional design; and, to a large extent, user-related factors, namely attitudes and perceptions. Yet these factors are typically studied in isolation, and socio-organisational factors are effectively ignored. The significance of studying institutional factors stems not only from the potential role they are likely to play, and have repeatedly been shown to play, in the context of information systems other than e-learning, but more crucially from the prevailing mode of hybrid (or blended) e-learning. Within hybrid models of integration, the role played by the pre-existing institutional context becomes all the more important, as the technological environment is meant to complement, rather than replace, the existing and long-established learning system.

Findings presented in this chapter demonstrate the ways in which a set of institutional and organisational factors were drawn into the formation and shaping of e-learning practices, defined as the shared and recurrent activities that emerge from learners' continuous interaction with learning technology. The analysis further shows that socio-organisational factors may influence e-learning implementation in various ways, as they both facilitate and hinder the adoption of technology and its consequent use.

The case study reported in this chapter involves the use of a standard LMS in a traditional, well-established university in the UK. Focus is placed on the integration of the LMS into the provision of a master's degree in a faculty of social science. Data collection encompassed three consecutive years, starting from the point at which the technology was first introduced in the institution. A research design was devised so as to guide a systematic examination of the organisational context.
Relevant institutional levels were mapped out and analysed as interconnected layers (Pettigrew, 1990).
At the core of this chapter is a set of institutional and socio-organisational factors impinging on e-learning, which will be seen to arise from the case analysis. Factors found to figure predominantly include: (a) institutional conventions and standards, (b) institutional activities and routines, (c) organisational resources (physical, technological—other than the LMS—and human), and (d) organisational culture and social relations. After introducing the factors and demonstrating their role in the formation of e-learning practices, a discussion of their implications follows. It is argued that these parameters have particular relevance in the context of hybrid modes of e-learning implementation, as they illuminate the tensions involved in integrating technological innovation into an established system. It will be shown that in cases where technology was introduced to supplement existing arrangements, that is, to compensate for deficiencies affecting the existing "off-line" setting, the integration process was typically vigorous and accelerated. Difficulties and challenges also arose, however, as the LMS was seen to compete or clash with its veteran offline counterpart. In some cases, interoperation and fusion were achieved through negotiation; in others, technological properties were ruled out and capabilities remained unexploited. The next part of the chapter provides a review of the literature on factors influencing the use of learning technology. Although considerable research on the topic has been undertaken, findings on institutional and socio-organisational factors are strikingly absent. The aim of the present study is to address this gap in the e-learning literature.
Factors Influencing the Use of Learning Technology

Various factors that may facilitate or hinder the effective use and integration of learning technologies have been studied and are briefly reviewed
in the following sections. A general overview is provided first, before attention is drawn to research on user perspectives, representing the most frequently studied parameter.
Overview of Factors Studied

Some attention has been paid to strategic considerations associated with the implementation of learning technology (LT). For example, Boyd-Barrett (2000) has examined six different models of universities implementing LT and identified three primary institutional and political characteristics that have a critical influence on distance learning outcomes: private or public emphasis, degree of dedication to online learning, and holistic or incremental strategy. In addition, three secondary dimensions are considered: technology mix, financial production models, and target markets (Boyd-Barrett, 2000). Williams (2003) has identified and rated the organisational roles and competencies needed for successful deployment of e-learning programmes in higher education institutions; implications for staff development and training are discussed at the managerial level.

Technical factors commonly addressed in the literature include availability and access (Chiero, 1997; Tu, 2000) as well as the reliability and stability of the technology in use (Webster & Hackley, 1997). Hara and Kling (2000) provided a systematic analysis of students' distressing experiences in online learning, demonstrating how technical difficulties and communication breakdowns emerged as significant factors that actually impede learning. In a recent study, the role of technical support in e-learning has been demonstrated (Ngai & Poon, 2007).

The instructional design of the technological environment is frequently cited as a major factor affecting the ways in which learning technologies are adopted and used (Penuel & Roschelle, 1999). Information structures (Potelle & Rouet, 2003) and the nature of online learning tasks are but a
few of the factors explored. It should be noted, however, that instructional design of learning technology emerges as a topic in its own right and so a comprehensive review of the subject is well beyond the scope of this chapter. The instructional style and, in particular, the role played by teachers/instructors in e-learning environments is considered a key factor affecting learning interactions online (Guldberg & Pilkington, 2007). The primary role of the e-moderator in facilitating an environment for effective learning to occur is frequently advocated (Salmon, 2000). However, this approach has been recently criticised by Oliver and Shaw (2003) as a kind of pedagogical determinism. In their study, the tutor’s enthusiasm and expertise are viewed as the major factors stimulating student engagement in asynchronous discussions. Mazzolini and Maddison (2003) have found that different roles taken by online instructors can influence students’ participation and perceptions but not always in expected ways. They conclude that the rates at which instructors participate are not simple indicators of the quality of online discussion and more subtle measures of the effectiveness of asynchronous discussion forums for learning and teaching are warranted.
User Perspectives

A review of the e-learning literature clearly suggests that the factors most frequently studied are those related to users' perspectives on technology-mediated learning (Kerr & Rynearson, 2006). Research in this area focuses on attitudes towards the application of ICT in learning and on perceptions, opinions, and preferences regarding learning technology. Within studies of students' perspectives on e-learning, perceptual and attitudinal variables are used to measure the effectiveness of the technology (Phipps & Merisotis, 1999) and as indicators of learning outcomes. For example, Webster and Hackley (1997) write: "we suggest that attitudes towards a technology,
the perceived usefulness of the technology, and attitudes towards distance learning should be included as important learning outcomes" (p. 1284). Waxman, Lin, and Michko (2003) suggest measuring perceptions and attitudes as indicators for affective outcomes and distinguish them from both cognitive and behavioural outcomes. In other studies, however, the perspectives of the users are taken to represent factors that may bear upon the consequent adoption and use of the technology. For instance, student attitudes are considered a prominent motivational factor in learning; therefore, positive attitudes may often accompany effective learning (Ayersman, 1996). The first approach seems to conflate outcomes and perceptions and in so doing blurs the distinction between learning outcomes and their potential cause. The second approach appears more coherent in so far as this distinction is concerned. This body of the literature will now be reviewed.

User attitudes are seen as influencing not only the initial acceptance of IT but also future behaviour regarding the use of computers. Thus, student attitudes towards technology form a fundamental basis for both participation and subsequent achievements in e-learning (Liaw, 2002; Selwyn, 1999). In measuring and assessing attitudes, different studies have applied the computer attitude scale (CAS). The CAS (Selwyn, 1999), based on Davis' (1989) technology acceptance model (TAM), consists of four subscales: anxiety related to using computers, perceived control when using a computer, perceived usefulness of using the computer, and behavioural attitudes towards using a computer. This model has been applied in various studies of learning technology users (e.g., Dusick, 1998; Selim, 2003). Mitra and Steffensmeier (2000) examined the pedagogic usefulness of the computer by focusing on student attitudes.
Categories of attitude included: user comfort with computers, apprehension regarding the use of computers, the effect of online learning on communication with instructors, general preferences for e-learning, the effect of e-learning on workload
in learning, and whether learning is easier in online environments. The results indicate that a computer-enriched environment is positively correlated with student attitudes toward computers in general, their role in teaching and learning, and their ability to facilitate communication. Overall, research has reported that students hold positive attitudes towards the application of ICT in learning (Phipps & Merisotis, 1999). Favourable attitudes have been found across many student populations, at all levels of education and training, and across different cultures (Mitra, 1997, 2000; Sanders & Morrison-Shetlar, 2001; Selwyn, 1999). Previous research has highlighted a range of factors influencing user attitudes towards computers and e-learning. Personal factors affecting attitudes, such as self-efficacy (Dusick, 1998; Liaw, 2002) and demographic characteristics, were explored in relation to user attitudes towards LT (Selwyn, 1999). Although overall attitudes were found to be consistently favourable, research into the factors influencing them has reported mixed findings (Sanders & Morrison-Shetlar, 2001).

User perceptions regarding learning technology have been explored in numerous studies. Drawing on Rogers' diffusion of innovation theory (Rogers, 1995), O'Malley and McCraw (1999) explored users' perceived effectiveness of online learning. In their analysis, facets of perceived characteristics of e-learning included relative advantage, course and student compatibility, grades, and schedule. Research findings indicate that students perceive that online learning has a significant relative advantage compared to traditional methods. These advantages include saving them time, fitting in better with their schedules, and enabling them to take more courses. However, students do not believe that they learn more in online learning courses. Interestingly, students seem to be ambivalent when comparing online to traditional methodologies.
They prefer traditional courses although they want more online courses (O'Malley & McCraw, 1999). Another study investigating students' perspectives on technology-mediated learning (TML) has
The Role of Institutional Factors in the Formation of E-Learning Practices
suggested that although the majority of students taking traditional courses favour online courses, they are less likely to enrol in them. However, the majority of students taking online courses find that such courses meet their academic needs and improve their technological skills (Leonard & Guha, 2001). In a study of students' receptivity towards TML, a "distance learning receptivity model" was examined (Christensen, Anakwe, & Kessler, 2001). In addition to overall attitudes towards LT, various demographic characteristics, and technology perceptions (perceived usefulness, technological familiarity, and technological accessibility), other perceived categories were explored, including reputation (of the lecturers involved, of the programme, and of the school), constraints (e.g., commuting time, work demands, family responsibilities), and learning preferences (towards traditional learning). The results reveal significant relationships between many of these variables and LT receptivity. Findings also indicate that some traditionally held assumptions, for example those regarding accessibility, reputation, and constraints, may not be valid in the new high-tech learning environment (Christensen et al., 2001). Research exploring opinions shared by students on issues concerning the application of technology to course instruction resulted in an opinion typology. Three opinion types were identified: (1) time and structure in learning (i.e., flexible time management that requires self-discipline), (2) social interaction in learning (i.e., individual work leading to less enrichment from others), and (3) convenience (i.e., commuting factors—time and cost—and less interference with work) (Valenta, Theriault, Dieter, & Mrtek, 2001).
Similar results were recently reported by Song, Singleton, Hill, and Koh (2004), indicating that the main factors perceived by students as influencing successful online learning include time management and perceived lack of sense of community.
Summary and Critical Remarks

A growing body of research has concentrated on factors that enhance or inhibit the adoption and use of learning technology. Some studies have focused on factors such as the strategic approach of the university towards online learning (Boyd-Barrett, 2000) or the development of appropriate competencies and roles within the institution (Williams, 2003). The factors of technology availability and access were considered, as were technical stability and reliability (Webster & Hackley, 1997). The inhibiting impact of technical difficulties and communication breakdowns in using learning technology was highlighted (Hara & Kling, 2000). Instructional factors appear to be fundamental: design issues and the role of the instructor are considered critical factors influencing students' participation and engagement in technology-mediated learning (Tu, 2000). The most frequently studied factors relate to students' perceptions of learning technology; a more detailed review was therefore provided of the attitudinal and perceptual factors studied to date.

A critique of the literature on "user perspectives," however, concerns the tendency to study perceptions and beliefs as isolated constructs, detached from action. Studies of users' perceptions seem to imply a straightforward, causal relationship between perceptions (e.g., assumptions about the technology) and action (i.e., actual use of the technology). Perceptions are therefore examined and measured within and among themselves. Yet the ways in which perceptions serve to guide people's actions may otherwise be viewed as more complex and thorny. For example, Picciano (2002) points out that much of the literature is based on students' perceptions of the quality and quantity of their interaction and performance. He suggests going beyond student perceptions to explore actual interaction and performance.
Findings indicate that while positive relationships between perceptions of interaction and perceived performance persist, the relationship between actual interaction (defined by actual postings on discussion boards) and actual performance measurements (designed to measure specific course objectives) is mixed and inconsistent (Picciano, 2002). It remains the case, however, that despite obvious difficulties, the most frequent research regarding students is the assessment of their attitudes and perceptions towards e-learning (Nachmias, 2002). Nachmias proposes that "it may well be that the ease of data collection regarding these variables is what gives them the broad attention of the research community" (p. 219).

Furthermore, the tendency to explore user perceptions and beliefs as standalone, independent constructs circumvents the potential role played by other sources of influence and represents a decontextualised notion of the user (Lamb & Kling, 2003). While opinions and beliefs held by users may well guide their choices to act, these are influenced by and dependent upon conditions and circumstances other than individual perceptions. Technological and socio-organisational properties are key elements that are similarly and interdependently drawn upon in continuous uses of technology (Orlikowski, 2000).

In sum, the literature overview provided above demonstrates that a variety of factors influencing the adoption and use of learning technology have been explored, with attention mostly given to user attitudes and perceptions. Yet these factors are typically studied in isolation, and contextual factors associated with the institutional setting of e-learning are effectively ignored. It is the aim of this study to address this gap in the e-learning literature. In the section that follows, a case is made for the significance of contextual factors, so that a better understanding of learning technology adoption and use can be achieved. In particular, attention is paid to the institutional context of higher education as the backdrop of the empirical findings presented in the subsequent part of this chapter.
The Significance of Institutional Context in E-Learning Research

The significance of contextual factors in the adoption and use of information and communication technologies (ICTs), although neglected in the e-learning discourse, has been demonstrated widely in the case of other information systems (Avgerou, 2001; Avgerou & Madon, 2004). Various studies have repeatedly shown how similar technologies yielded different results in different organisations, thus illuminating the crucial role played by contextual particularities in shaping the use of technology and its consequences (Robey & Boudreau, 1999).

The significance of studying institutional factors in e-learning stems not only from the role they are likely to play, as shown in implementation cases of other ICTs, but more crucially from the prevailing mode of hybrid e-learning. "Hybrid" (Cookson, 2002) or "blended" (Ginns & Ellis, 2007) modes of implementation refer to the integration of learning technology into traditional on-campus education. Since hybrid implementation works within the physical environment of the university, and since learning technology is meant to complement, rather than to replace, the existing system, the role played by the pre-existing institutional context becomes all the more important. Furthermore, many of the controversial issues surrounding the highly researched topic of distance learning become less critical; different opportunities, challenges, and concerns are brought to the fore, calling for a new research agenda (Nachmias, 2002). Although some initial attempts can be cited (McDonald & McAteer, 2003; Wu & Hiltz, 2004), hybrid e-learning largely remains an under-researched phenomenon requiring further exploration. The importance of research in this area is highlighted by the growing pervasiveness and anticipated growth of this integrated mode across the higher
education sector (Allen & Seaman, 2004). As Garrison and Kanuka (2004, p. 104) conclude, "it is essential that researchers begin to explore the impact of blended learning." In exploring hybrid e-learning, the institution into which e-learning is introduced cannot be treated as a plain variable. Indeed, criticism has been aired against the overgeneralised and overstandardised assumptions about the character of "universities" and "traditional learning" prevailing in the literature (Ehrmann, 1995; Saba, 1999). Ehrmann (1995) points to the mechanical conception underlying comparative studies of technology-based versus traditional methods. These studies assume that higher education operates like a machine, and that each college is a slightly different version of the same ideal machine. The phrase "traditional methods" is used to represent some widely practised method that presumably has predictable, acceptable results. Yet "traditional methods" do not define the higher education that the research reveals. In fact, university learning is not so well-structured, uniform, or stable that one can simply compare an innovation against traditional processes. Ehrmann refers back to the term "organised anarchy," coined by Cohen and March (1974), to describe how higher education institutions function: a variety of inconsistent goals, unclear methods and processes, and uncertain organisational boundaries seems to characterise both colleges and their courses (Ehrmann, 1995). The difficulty of talking about "universities" in general was likewise acknowledged by Brown and Duguid (1998), who state that "the menagerie has many beasts and several species" (p. 5). Both Saba (1999) and Ehrmann (1995) conclude that the search for global answers about the comparative effectiveness of technology is fundamentally useless.
Discussions of broad concepts such as "traditional education" or "face-to-face education" are vague and do little to shed light on the subject of learning. The apparent diversification of learning technology implementation in higher education
and the myriad hybrid models deployed further point to the potential influence of particular socio-organisational elements on the actual use of the technology. It is therefore suggested that a systematic analysis of the context within which learning technology is implemented and used is essential for understanding both the processes and the consequences of e-learning. The research reported in the next part of the chapter takes a step in this direction, seeking to increase understanding of the contextual considerations involved in e-learning practices. It does so by eliciting a set of institutional and socio-organisational factors emerging from an empirical analysis of LMS implementation in the hybrid mode.
Part II: Findings and Implications

Contextual Factors and E-Learning Practices

Research findings presented in the following sections demonstrate the ways in which a set of institutional factors were drawn into the formation and shaping of e-learning practices. In this context, e-learning practices are defined as the shared and recurrent activities that emerge from learners' continuous interaction with learning technology. Findings and illustrations draw on a longitudinal study into the structuring of technology-mediated learning practices in higher education (Halperin, 2005). The empirical setting of the study involved the use of a standard LMS, namely WebCT™, in a "traditional," well-established university located in the heart of London, UK. Focus was placed on the efforts to integrate the LMS into the provision of a master's degree in a faculty of social science. The student body used as the research sample included 127 students in total. The demographic features of the students reflected a fairly diverse collective. While some
similarities were apparent in terms of formal education acquired and, presumably, socio-economic background, cultural and national diversity had a strong presence. As for gender distribution, two-thirds of the students were female and one-third male. Several data collection tools were employed in the research, including in-depth interviews, informal conversations, documents, off-line observations, and online observations (through logons, tracking utilities, log files, and compiled transcripts of computer-mediated communication [CMC] discussion messages). The use of various data collection tools concurs with Yin (1984), who advocates the use of multiple sources of information in conducting case studies. Data collection encompassed three consecutive years, starting from the point at which the technology was first introduced in the institution. A research design was devised so as to guide a systematic examination of the organisational context, relying on qualitative analysis methods. Relevant institutional levels were mapped out and analysed as interconnected layers (Pettigrew, 1990), and included the off-line course (the traditional, face-to-face elements of the taught course), the programme, the department, and the university. A set of institutional and socio-organisational factors impinging on e-learning emerged from the case analysis. Factors found to figure predominantly include institutional conventions and standards, institutional activities and routines, organisational resources (physical, technological other than the LMS, and human), and organisational culture and social relations. These are discussed and illustrated in turn.
Institutional Conventions and Standards

Findings indicate that pre-existing institutional conventions and associated procedures were drawn into the formation of e-learning practices. Systematic analysis of both formal and informal
institutional norms suggests that practices enacted online are anchored in, and conditioned by, long-established standards and regulations. An illustrative example concerns the way in which institutional conventions of assessment served to shape e-learning practices. First, the pre-existing assessment framework was thought of as a way to reinforce new practices and to endorse their integration into the learning process. Organisational "rules" were drawn upon in an attempt to strengthen and institutionalise the online practice. Consequent efforts to establish the practice through formal assessment required compliance with a set of institutional conventions and related procedures, which in turn served to structure the e-learning practice. In the case study institution, strong emphasis is placed on standardised and "objective" assessment. Thus, introducing assessment to online activities required the approval of a school-wide committee and the assurance of conformity with pre-set criteria, thereby aligning online outputs with institutional conventions. Once the formal assessment of online activities was put into place, a new e-learning practice emerged. This distinct practice may be termed discourse and is differentiated from other knowledge-sharing practices administered online, such as information exchange, as discussed below. The practice of discourse relied on the discussion module of the LMS, which was used to make public well-developed statements regarding various topics studied in the course. Students referred to this activity as a "mini-essay" and conceived of it as an individual output for assessment rather than as an integral part of an online discussion, which was the original idea behind it. As students commented in interviews: …at some point it became a series of statements just to prove how intelligent you were and it became very difficult to answer or reply to these statements. Sort of mini essays that people were posting …
…I noticed that people write those long spellchecked mini essays, which doesn't really allow discussion. Thus, the institutionalisation of the practice by means of formal assessment meant that a potentially innovative e-learning practice became conventional.
Pre-Existing Activities and Routines

Findings further suggest a strong linkage between pre-existing activities and routines enacted off-line and the new online learning practices which emerged through the continuous use of the technology. There is evidence to suggest that distinct e-learning practices are intertwined with key components of the off-line course, such as lectures and face-to-face seminars. One example involves an e-learning practice which may be termed knowledge presenting. Relying on the student presentation module of the LMS, the technology in this practice was used recurrently to present knowledge on given learning topics. Structured activities in this context included the preparation and publishing of Web-based presentations pertaining to various themes studied in the course. From the beginning of the course, a weekly routine for publishing online presentations was set up through the LMS. Presentations were to be uploaded regularly by a given time: on the day before the lecture and seminars. A topic and a set of articles were given each week. Students were to provide a brief summary of the reading, followed by questions and criticisms, using multimedia options in their presentations. The e-learning practice of knowledge presenting was thus designed to support off-line seminar discussions. It was the routines and conventions of the traditional seminar that gave rise to the practice and served to shape its pattern. Other facets of the online environment and its consequent exploitation were clearly rooted within pre-existing and long-established organisational
routines. For example, the weekly routine of recurrent face-to-face lectures is reflected in the way in which information is organised within the system’s content module. While online content could otherwise be organised according to any chosen logic (e.g., vertical or thematic), a linear, week-by-week logic served as the organising principle underlying content release and presentation. Similarly, the temporal pattern of online interaction concurred with off-line time cycles in so far as “term time” and “vacation time” are concerned. The analysis clearly suggests that configuration of time in online practices was underpinned by the pre-existing temporal profile of the off-line activities.
Organisational Resources

Characteristics of the organisational environment, specifically the nature of campus resources, were found to influence the adoption and use of the technology by both promoting and inhibiting integration efforts. Organisational resources in this context may be discussed in terms of physical, technological, and human resources. Physical resources and features of the material environment of the organisation emerged as prominent characteristics motivating the adoption of the technology. As mentioned earlier, the field organisation is located at the heart of a metropolis where real estate prices are remarkably high. Space is therefore a scarce resource and the campus is exceptionally crowded. Resources for students, such as study rooms, are limited and, in general, poorly maintained. Under these conditions, and given that no residence is available on campus, students typically preferred to work outside the university and tended to rely on remote access through the LMS. As more resources became available online, the use of the system increased, allowing students to move away from using campus facilities such as the physical library. This was especially evident in the enactment of an individual-productivity practice, an e-learning practice in which students used the LMS to obtain various resources and content related to the taught course. Through this practice, frustration with inconvenient work conditions could be relieved. Hence, the online learning environment is seen to compensate for and complement the poor and inefficient resources of the physical environment of the institution, and in so doing, its adoption and use are motivated and accelerated. Technological resources other than WebCT were also brought to bear on the adoption of the learning technology and its subsequent integration. More specifically, a number of systems were placed at the disposal of the students, such as mail servers, public folders, an online administration system, and a digital library. While each of these systems meets distinct requirements, some similarities are evident when their designed functionalities are compared with those of the LMS. For example, an e-mail application is provided as a module within WebCT. Yet this component was never exploited, as all users opted for the dedicated exchange server of the organisation. Although WebCT presents itself as a definitive, all-purpose learning environment, comparable technologies implemented in the organisation appear to delimit its role by providing viable alternatives. The fact that other, parallel systems were used in the organisation explains why certain modules of the LMS were disregarded. It also sheds light on some antagonistic attitudes towards the technology, since students were required to learn and to manage several systems at once. As this burden appeared only partly justified, issues of motivation became unavoidable. Being a leading academic institution located in a capital city, the case study institution hosts highly valued human resources. The university attracts renowned scholars and high-profile individuals from the social and political arena to give talks and to take part in public debates.
The intellectually charged atmosphere of the campus and the opportunity it provides to participate in a range of events were acknowledged by the students
as a major benefit. As one student commented in an interview: “people like Naomi Klein wouldn’t come to Hamburg.” Hence, the unique opportunity to participate in non-mediated interactions on campus hindered students’ motivation and interest in online interaction. For similar reasons, students did not exploit specific online resources such as recorded talks of guest lecturers. In this respect, the resources and opportunities offered on campus are seen to compete with their online counterparts and raise questions about the added value provided by the technology within the particular organisational context.
Organisational Culture and Social Relations

Other contextual characteristics influencing the adoption and use of the technology stem from socio-cultural features of the organisation. A prominent feature of the institution studied concerns the national and cultural diversity of its student body. According to the graduate school prospectus, the institution has attracted students from 130 countries worldwide. As the director states (graduate school prospectus, p. 8), "the [school] is global in outlook and cosmopolitan in character." Figure 1 indicates the distribution of graduate students by domicile. The cosmopolitan character of the student body was drawn on in the learning practice and stimulated specific activities, as is evident in the case of an e-learning practice termed information exchange. In this practice, the discussion module of the LMS was used recurrently to exchange information about learning content. In particular, students exchanged relevant information concerning data from their own countries. In this way, their knowledge of different languages and familiarity with different national contexts served substantive aspects of the course and gave rise to a structured learning activity online. Yet the cultural diversity also hindered motivation for online interaction among students
Figure 1. Graduate students by domicile (pie chart showing the percentage distribution of students across Australia, South America, North America, Africa, Asia, the UK, and other Europe)
who expressed greater interest in non-mediated interaction. As indicated above, the opportunity for collocated interaction on campus appeared compelling, valuable and unique. As one student explained: The thing that makes this programme good is actually the people. What I like most is the fact that everyone in my class has such different perspectives. They come from different backgrounds and bring different ideas into the discussion…. I sit there and I’m this capitalist American and XXX[name of student] from Ukraine disagrees with me…but now the perspective she represents becomes real…and it makes me much less dismissive than I used to be, which is good!...engaging like this with people that have experienced…things that they lived through…so it’s not anymore something theoretical that is out there, it is something that is right in front of you…so you take it more seriously … There is evidence to suggest that students were inclined towards face-to-face interaction over remote, online interaction, especially given the opportunity to engage with a mixed and diverse community of peers in a collocated setting. The motivation to participate in online interaction
was further challenged by the social relationships which developed among the students on the programme. Students of the master's programme under investigation managed to create a vibrant social life. In contrast to other programmes of the same department, they remained a relatively small group in terms of student numbers, and were distinct in that students were to leave together for another year of study in the USA, as the programme involved a collaboration between two universities (one in the UK and one in the U.S.). These features of the programme may explain the formation of more tightly coupled social relationships among students. Given the frequent occasions for off-line interaction, online communication seemed redundant and its relevance was at times contested. Several students commented that the technology "feels like an artificial construct" and that "it creates distance where there is none." While students frequently challenged the benefits of asynchronous communication, its inherent ability to overcome time constraints still made it seem a valuable means of interaction. Synchronous CMC, on the other hand, although technically provided within specific WebCT modules, was entirely ignored. Bearing in mind the social circumstances and the opportunity and preference for collocated interaction, mediated interaction in real time appeared all the more redundant and artificial. For these reasons, synchronous CMC options remained unexploited and removed from the learning practice. Finally, the influence of the learning culture of the organisation prevailed in specific e-learning practices. In particular, the individualistic culture and the competitive atmosphere of the institution were reflected in the individual-productivity and discourse practices described above. In these cases, the use of the technology served to support individual efforts and achievements and coincided with the dominant learning culture of the organisation. This culture manifests itself in the perceptions of students and lecturers, and is embodied in formal institutional documents, as shown below: At XXX [name of institution], we believe you should be largely responsible for organising your own work and meeting the requirements of the programme. Although support with your studies is always at hand if required, a strong emphasis is placed on self-reliance. You will spend the majority of your time on your own work rather than with formal instruction. There is sufficient time in your schedule for reading and reflection. (Graduate Prospectus, p. 8) Although collaborative e-learning practices emerged, these were typically associated with sharing and exchange rather than with team efforts or collaborative tasks, which remained, by and large, individual-based.
Conclusion

The aim of the study reported in this chapter was to explore the institutional and socio-organisational factors involved in the adoption and use of learning technology in higher education. In so doing, the study attempted to address a perceptible gap in the current e-learning literature.
Findings arising from the study indicate that the adoption and use of learning technology are strongly influenced by the surrounding socio-organisational environment. More specifically, institutional factors are shown to play a vital role in the formation and shaping of e-learning practices within the context of LMS use in higher education. Organisational factors found to figure predominantly include institutional conventions and standards, pre-existing activities and routines, existing resources available to the institution, and, finally, the institution's organisational culture. These factors have particular relevance in the context of hybrid modes of e-learning implementation, as they illuminate the tensions involved in integrating technological innovation into an established system. Further analysis of the emerging socio-organisational parameters demonstrates the ways in which these factors can both promote and inhibit the integration of an LMS. The important role played by an array of organisational properties denotes the institutional embeddedness of e-learning practice. Since institutions of higher education can and do exhibit diversity in terms of their socio-organisational characteristics (Ehrmann, 1995), efforts to integrate learning technology across academic organisations should expect to encounter inconsistent and contradictory consequences (Robey & Bourdreau, 1999). Within hybrid models of e-learning implementation, the role played by pre-existing rules and resources becomes all the more significant, as the technological environment is meant to complement, rather than replace, the existing and long-established learning system. The analysis suggests that, in cases where technology was introduced to supplement existing arrangements, that is, to compensate for deficiencies affecting the existing off-line setting, the integration process was typically vigorous and accelerated.
Clearly, difficulties and challenges also arose as the online system was seen to compete or clash with its veteran off-line
counterpart. In some cases, interoperation and fusion were achieved through negotiation; in others, technological properties were ruled out and capabilities remained unexploited. Understanding the ways in which socio-organisational factors impinge on LMS use bears practical implications. Rather than be driven by the technological capabilities and the features available, LMS implementation efforts and course design should take account of the contextual particularities associated with the educational institution in question. Particular attention should be paid to the strengths and weaknesses of the organisation as they may be viewed from the students’ point of view. This is especially relevant in hybrid e-learning projects, where the advantages of both learning systems—online and off-line—ought to be realised. As the case study illustrates, the implementation process of the LMS was accelerated when it compensated for deficiencies apparent in the physical system on campus. This was evident in, for example, the case of space and physical resources available for the students. At the same time, conflict between the systems is expected if the LMS competes with the perceived strength of the “off-line” learning environment. Of main concern here is the tension arising between virtual and collocated interaction, which directs course design efforts to offer complementary online and face-to-face communication in the learning practice. Evident in this hybrid e-learning case study is the dominant influence of the “traditional,” off-line learning system and its methods on the new online practices. An illustrative example discussed in this chapter is the role played by traditional assessment in structuring e-learning practices. While established, traditional methods may prove powerful in reinforcing the use of the technology, they might also stand in the way of innovative learning practice and, in so doing, undermine the original aim of implementing the technology.
Directions for Further Research

This chapter has presented preliminary findings of an exploratory research study into the institutional factors involved in e-learning. The lack of previous research on this topic, and hence the exploratory nature of the study, suggest that further research is called for in order to achieve a more rounded understanding of the role played by institutional factors in the formation of e-learning practices. In particular, further research may enable validation of the results across cases. While the institutional factors identified stemmed from a longitudinal, in-depth analysis, a single case study design was applied, thus suggesting limited generalisability. Further research may also discover institutional factors in addition to the ones reported here, and in so doing extend the knowledge on this apparently important topic. While any mode of e-learning application is embedded within some institution or broader systems surrounding it, the relationship between the existing institutions, or the off-line environment, and the online learning environment is particularly relevant in the prevailing mode of hybrid or blended e-learning. This is so because, in blended learning, traditional off-line learning and innovative online learning are deliberately mixed with one another. To better understand this crucial relationship, the study described in this chapter has sought to address the ways in which pre-existing institutional factors influence emerging online practices. Yet further research is warranted on the reverse effects, that is, on the ways in which online learning practices influence off-line practices and traditional routines in learning.
References

Allen, I. E., & Seaman, J. (2004). Sizing the opportunity: The quality and extent of online education in the US, 2002 and 2003. Needham, MA: Sloan-C.
Avgerou, C. (2001). The significance of context in information systems and organizational change. Information Systems Journal, 11, 43-63. Avgerou, C., & Madon, S. (2004). Framing IS studies: Understanding the social context of IS innovation. In C. Avgerou, C. U. Cibbora & F. F. Land (Eds.), The social study of ICT (pp. 162-182). Oxford: Oxford University Press.
Dusick, D. (1998). What social cognitive factors influence faculty members' use of computers for teaching? A literature review. Journal of Research on Computing in Education, 31(2), 123-137. Ehrmann, S. C. (1995). Asking the right question: What does research tell us about technology and higher learning? Change, 17(2), 20-27.
Ayersman, D. J. (1996). Reviewing the research on hypermedia-based learning. Journal of Research on Computing in Education, 28(4), 501-525.
Garrison, D. R., & Kanuka, H. (2004). Blended learning: Uncovering its transformative potential in higher education. Internet and Higher Education, 7(2), 95-105.
Boyd-Barrett, O. (2000). Distance education provision by universities: How institutional context affects choices. Information, Communication & Society, 3(4), 474-493.
Ginns, P., & Ellis, R. (2007). Quality in blended learning: Exploring the relationships between online and face-to-face teaching and learning. Internet and Higher Education, 10(1), 53-64.
Brown, J. S., & Duguid, P. (1998). Universities in the digital age. In B. L. Hawkins & P. Battin (Eds.), The mirage of continuity: Reconfiguring academic information resources for the 21st century (pp. 39-60). Washington, DC: Council on Library and Information Resources.
Guldberg, K., & Pilkington, R. (2007). Tutor roles in facilitating reflection on practice through online discussion. Educational Technology and Society, 10(1), 61-72.
Chiero, T. C. (1997). Teachers' perspectives on factors that affect computer use. Journal of Research on Computing in Education, 30(2), 133-145. Christensen, E. W., Anakwe, U. P., & Kessler, E. H. (2001). Receptivity to distance learning: The effect of technology, reputation, constraints, and learning preferences. Journal of Research on Computing in Education, 33(3), 263-370. Cohen, M. D., & March, J. G. (1974). Leadership and ambiguity: The American college president. New York: McGraw-Hill. Cookson, P. (2002). The hybridization of higher education. International Review of Research in Open and Distance Learning, 2(2), 1-4. Davis, F. D. (1989). Perceived usefulness, perceived ease of use and user acceptance of information technology. Management Information Systems Quarterly, 13, 319-340.
Halperin, R. (2005). Learning technology in higher education: A structurational perspective on technology-mediated learning practices (Doctoral dissertation). London: London School of Economics. Hara, N., & Kling, R. (2000). Student distress in a Web-based distance education course. Information, Communication and Society, 3(4), 556-579. Kerr, M. S., & Rynearson, R. (2006). Student characteristics for online learning success. Internet and Higher Education, 9, 91-105. Leonard, J., & Guha, S. (2001). Students’ perspectives on distance learning. Journal of Research on Technology in Education, 34(1). Liaw, S. S. (2002). Understanding user perceptions of WWW environments. Journal of Computer Assisted Learning, 18, 1-12.
Mazzolini, M., & Maddison, S. (2003). Sage, guide or ghost? The effects of instructor intervention on student participation in online discussion forums. Computers and Education, 40, 237-253. McDonald, J., & Mcateer, E. (2003). New approaches to supporting students: Strategies for blended learning in distance and campus based environments. Journal of Educational Media, 28(2-3), 129-146. Mitra, A., & Steffensmeier, T. (2000). Change in student attitudes and student computer use in a computer-enriched environment. Journal of Research on Computing in Education, 32(3), 417-431. Nachmias, R. (2002). A research framework for the study of a campus-wide Web-based academic instruction project. Internet and Higher Education, 5(3), 213-229. Ngai, E., & Poon, J. (2007). Empirical examination of the adoption of WebCT using TAM. Computers and Education, 42(2), 250-267. Oliver, M., & Shaw, G. P. (2003). Asynchronous discussion in support of medical education. Journal of Asynchronous Learning Networks, 7(1), 56-67. Omalley, J., & McCraw, H. (1999). Student perceptions of distance learning, online learning and the traditional classroom. Online Journal of Distance Learning Administration, 2(4), 1-16. Orlikowski, W. J. (2000). Using technology and constituting structures: A practice lens for studying technology in organizations. Organization Science, 11(4), 404-428. Penuel, B., & Roschelle, J. (1999). Designing learning: Cognitive science principles for the innovative organization. Stanford Research Institute International, 1-26. Pettigrew, A. (1990). Longitudinal field research on change: Theory and practice. Organization Science, 1(3), 267-291.
Phipps, R., & Merisotis, J. (1999). What's the difference? A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC: The Institute for Higher Education Policy. Picciano, A. (2002). Beyond student perceptions: Issues of interaction, presence, and performance in an online course. Journal of Asynchronous Learning Networks, 6(1), 21-40. Potelle, H., & Rouet, J. F. (2003). Effects of content representation and readers' prior knowledge on the comprehension of hypertext. International Journal of Human-Computer Studies, 58, 327-345. Robey, D., & Bourdreau, M. (1999). Accounting for the contradictory organizational consequences of information technology: Theoretical directions and methodological implications. Information Systems Research, 10(2), 167-185. Rogers, E. M. (1995). Diffusion of innovations. New York: Free Press. Saba, F. (1999). Is distance education comparable to traditional education? Retrieved October 19, 2007, from http://www.distance-educator.com/der/comparable.html Salmon, G. (2000). E-moderating: The key to teaching and learning online. London: Kogan Page. Sanders, D., & Morrison-Shetlar, A. I. (2001). Student attitudes towards Web-enhanced instruction in an introductory biology course. Journal of Research on Computing in Education, 33(3), 251-262. Selim, H. M. (2003). An empirical investigation of student acceptance of course Web sites. Computers and Education, 40, 343-360. Selwyn, N. (1999). Students' attitudes towards computers in sixteen to nineteen education. Education and Information Technologies, 4(2), 129-141.
Song, L., Singleton, E. S., Hill, J. R., & Koh, M. H. (2004). Improving online learning: Student perceptions of useful and challenging characteristics. Internet and Higher Education, 7(1), 59-70. Tu, C. (2000). Critical examination of factors affecting interaction on CMC. Journal of Network and Computer Applications, 23, 39-58. Valenta, A., Theriault, D., Dieter, M., & Mrtek, R. (2001). Identifying student attitudes and learning styles in distance education. Journal of Asynchronous Learning Networks, 5(2), 111-127. Waxman, H. C., Lin, M., & Michko, G. M. (2003). A meta-analysis of the effectiveness of teaching and learning with technology on student outcomes. Naperville, IL: Learning Point Associates. Webster, J., & Hackley, P. (1997). Teaching effectiveness in technology-mediated distance learning. Academy of Management Journal, 40(6), 1282-1309. Williams, P. E. (2003). Roles and competencies for distance education programs in higher education institutions. The American Journal of Distance Education, 17(1), 45-57. Wu, D., & Hiltz, S. R. (2004). Predicting learning from asynchronous online discussions. Journal of Asynchronous Learning Networks, 8(2), 139-152. Yin, R. K. (1984). Case study research: Design and methods. Thousand Oaks, CA: Sage.
AddItIonAL reAdIng
Bullock, C., & Ory, J. (2000). Evaluating instructional technology implementation in higher education environments. American Journal of Evaluation, 21(3), 315-328. Huynh, M. Q., Umesh, U. N., & Valacich, J. S. (2003). E-learning as an emerging entrepreneurial enterprise in universities and firms. Communications of the Association for Information Systems, 12, 48-68. Kim, K., & Bonk, C. J. (2002). Cross-cultural comparisons of online collaboration. Journal of Computer Mediated Communication, 8(1), 1-31. Kling, R., & Iacono, S. C. (1987). The institutional character of computerised information systems. Office: Technology and People, 5(1), 7-28. Lamb, R., & Kling, R. (2003). Reconseptualizing users as social actors. Management Information Systems Quarterly, 27(2), 197-235. Laurillard, D. (2001). Rethinking university teaching: A framework for the effective use of learning technologies (2nd ed.). London: Routledge Falmer. Olsen, G. M., & Olsen, J. S. (2000). Distance matters. Human-Computer Interaction, 15, 139178.
Chapter VII
E-Learning Value and Student Experiences: A Case Study
Krassie Petrova, Auckland University of Technology, New Zealand
Rowena Sinclair, Auckland University of Technology, New Zealand
ABSTRACT

This chapter focuses on understanding how the value of student learning and the student learning experience could be improved given pertinent environmental and academic constraints of an e-learning case. Believing that a better understanding of student behaviour might help course design, the chapter revisits the outcomes of two studies of e-learning and analyses them further using a framework which conceptualises the value of e-learning from a stakeholder perspective. The main objective of the chapter is to identify some of the important issues and trends related to perceived e-learning value. The analysis of emerging and future trends indicates that, in the future, blending of e-learning and face-to-face learning is likely to occur not only along the pedagogical but also along the technological and organizational dimensions of e-learning. Therefore, new blended learning and teaching models should further emphasise the alignment of learning with work/life balance.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION

E-learning is used as a comprehensive term for the use of a variety of information and communication technologies to enhance and support learning, often blending their use. Online learning can be defined as an implementation of e-learning using Web-based technologies (Petrova, 2007). Online learning and e-learning are used as synonyms throughout the text. Across universities worldwide, participants' engagement and achievement and the support provided by educational technology have become the subject of intensive research, development, and discussion (Blinco, Mason, McLean, & Wilson, 2004; Buzzetto-More & Pinhey, 2006; Kickul & Kickul, 2006; Lee & Nguyen, 2005; Sharpe & Benfield, 2005). The work presented here focuses specifically on understanding how the quality of student learning and the student learning experience could be improved while working within environmental and academic constraints, in the belief that a better understanding of student behaviour might help course design. The main objective of the chapter is to identify some of the important issues and trends related to the perceived value of e-learning. To this end, the outcomes of two studies of e-learning are revisited and analysed further using a framework which conceptualises the value of e-learning. Current and emerging trends in the drivers of student satisfaction are discussed and recommendations are presented.
BACKGROUND

E-learning was first introduced into the undergraduate business programme at the New Zealand university used in this case study as early as 1999. However, since these early adoption days e-learning has become widespread across the whole university, and its importance is now recognised as a strategic approach to providing a
learning environment that promotes and supports student success. The programme used in this case study is a typical three-year undergraduate programme. A cornerstone of its philosophy is to encourage independent, student-led learning. Entrants to the programme come from a range of backgrounds: the intake is ethnically diverse, some students have English as an additional language, and even full-time students often work long hours. E-learning was introduced in an attempt to alleviate some of these problems. However, there is evidence to suggest that the continuing effort involved in developing and delivering e-learning courses may place a significant demand on academics' time and institutional resources, as the amount of individual attention needed may "rival a one-to-one course" (Tastle, White, & Shackleton, 2005, p. 249). Since 1999, e-learning within the case study programme has gradually developed into two distinct teaching and learning models of Web-based online learning, known as "flexible mode" and "enhanced mode." Both models belong to the category of "hybrid" or "blended" learning (Mortera-Gutierrez, 2006; Petrova, 2001), as their delivery format combines face-to-face and online teaching and learning. In enhanced mode, e-learning is used to complement (in class) and enhance (off campus) the three hours per week of classroom teaching, using the institutional e-learning platform (BlackBoard™). In class, e-learning activities include exercises and demonstrations; off campus, the platform is mostly used as a vehicle for questions and answers about the course and assessment. As a rule, in enhanced mode online activities are not formally assessed. In flexible mode, a portion of the face-to-face teaching is replaced by the equivalent time in online activities, performed off campus in the students' own time. Students are given detailed instructions about the e-learning activities they are expected to engage in, and about the expected outcomes. The "flexible" online activities may
be either individual or group, and often require significant preparatory research. Typically, they will have a fixed completion deadline, and may be incorporated into the assessment programme. The overall spread of e-learning in the case study programme is relatively high: following their specific study pathway, a typical undergraduate student might be engaged in e-learning in up to 58% of their studies (Petrova & Sinclair, 2005). This rather “massive” advent of e-learning has introduced a significant change to many aspects of the teaching and learning environment, including stakeholder perceptions about its value.
E-LEARNING VALUE: STAKEHOLDER PERSPECTIVES

Studies in the area of change processes and management related to the introduction of new educational technologies have found that students might be resistant to change. In an early article on the use of information technology to enhance education in business schools, Leidner and Jarvenpaa (1995) pointed out that there was a need to better understand the role of students in learning models involving information technology, and suggested that students would be "likely to resist the new learning models" (Leidner & Jarvenpaa, 1995, p. 287). Students are one of the recognised stakeholder groups involved in e-learning; therefore, any emerging organizational formats developed to accommodate this educational paradigm need to be managed carefully in order to avoid early student disillusionment and the subsequent failure of students to realise the full educational potential of e-learning (Hunt, Thomas, & Eagle, 2002; Sharpe & Benfield, 2005). Student participation in e-learning, and student perceptions in particular, have been an emphasis of research (Hisham, Campton, & FitzGerald, 2004; Lizzio, Wilson, & Simons, 2002; Phillimore, 2002; Swan, 1995; Wells, Fieger, & de Lange, 2005).
More specifically, Lizzio et al. (2002) found that student perceptions of the teaching and learning environment and of assessment practice contribute to the development of deeper approaches to studying. They established that positive perceptions of the environment directly influence both measured academic outcomes (for example, academic achievement) and qualitative learning outcomes, such as workplace-related skills. Other studies have highlighted usage patterns in terms of time, place, and functional components (Blinco et al., 2004; Burr & Spennemann, 2004; McKnight & Demers, 2002). An important point made in the reviewed research studies and reports informs the studies presented here: analysis of students' perceptions in conjunction with factual data can provide a valuable input to the processes of curriculum development and management (Burr & Spennemann, 2004; Buzzetto-More & Pinhey, 2006; Kickul & Kickul, 2006; McKnight & Demers, 2002; Sharpe & Benfield, 2005). Two studies were carried out during the period 2003-2005, both investigating the case study programme. Based on the assumption that improving the scholarship of e-learning depends on understanding stakeholders' perspectives, the overall research framework used in the studies (Figure 1) includes students as they interact with the e-learning platform in the context of courses delivered online, while academics participate in e-learning as course developers and implementers. E-learning is facilitated by the organizational formats and structures of the university. The work aimed to identify and explore criteria for stakeholder evaluation of e-learning, to identify patterns of online platform usage, and to provide a basis for understanding student satisfaction with e-learning. Two research questions were investigated:
1. What is the perceived value of e-learning from a stakeholder perspective?
2. Are students satisfied with e-learning, and what are the manifestations of satisfaction?
In the first study, data were collected in 2003 using an anonymous questionnaire distributed to 44 academics, 6 managers, and 75 students. Students and academics were selected from across the courses in the case programme. Managers represented the organization at a senior level. In the second study, data were collected from two sources: the statistical reports provided by BlackBoard™ (collated for the months of August, September, and October, 2004) and the responses to an anonymous questionnaire distributed to students. The questionnaire was sent at the end of 2004 to 730 participants in both “flexible” and “enhanced” courses (452 and 278 students respectively). Some of the findings of the two studies were reported in more detail by Sinclair (2003b), and by Petrova and Sinclair (2005). A framework for further analysis of the issues emerging from the findings of the studies is presented in the next section.
A VALUE FRAMEWORK FOR E-LEARNING

The issue of "value" is central to most organizations operating in a competitive environment. In the business sector "value" can be many things, for example, offering valuable customer services. In the education sector the issue of value is broad, as there are many stakeholders with differing viewpoints on what constitutes value. In a market-driven education environment, tertiary institutions need to establish their credentials within their niche market. Potential students need a reliable indicator of value to enable them to navigate the huge number of courses available without falling victim to unlicensed "Web-cowboy" operators and "digital diploma mills" (Hope, 2001), where the emphasis is on taking students' money rather than on any real concern for the value of the students' learning. Poor value can be reflected in students withdrawing from a course
Figure 1. E-learning and stakeholder framework (Adapted from Petrova & Sinclair, 2005)
or not attending. Word of mouth can then result in one disaffected student telling 10 others of their experience, which can mean a large decrease in enrolments as students take up courses at other tertiary institutions. In the face of increasing costs, tertiary institutions are looking at ways to decrease spending. Online courses are less constrained by infrastructure than face-to-face courses and have the potential for a lower cost per student. This can lead a tertiary institution to choose quantity over the perceived value of the course (Heerema & Rogers, 2001). Institutions must realise that value should never be compromised: in market-driven environments students have freedom of choice and will move if the value, in their eyes, deteriorates. It is interesting, in all this discussion of value, that McLoughlin and Luca (2001) consider that technology has yet to make significant improvements in the value of the education being offered. This possibly reflects the current emphasis of online courses, which is to make education more accessible to students and to replicate, rather than improve on, face-to-face courses. If the perceived value of e-learning is lowered, the credibility of the course will ultimately diminish in the eyes of employers when graduates cannot meet expected outcomes. This will lead students elsewhere, as they want a qualification that employers recognise. To ensure value in online courses worldwide, several universities have joined global alliances such as the Global University Alliance (2000) and the World Alliance in Distance Education (2002). These alliances have focused on ensuring value in online learning and on providing students with a wide variety of quality online courses that they can access from different locations.
Another way in which tertiary institutions and accrediting organizations have tried to increase the perceived value is by developing benchmarks for online courses (Sinclair, 2003a). In the United Kingdom (UK) the Quality Assurance Agency
(2002) has developed distance learning guidelines at the request of the distance learning community in the UK, which recognised the importance of having a code of practice to assure value in the courses offered (Cavanaugh, 2002). In the United States, the Institute for Higher Education Policy (2000) developed a list of 45 benchmarks, and in Canada the Canadian Association for Communication Education sponsored a project to develop quality distance education guidelines (Barker, 2002; FuturEd, 2002). Whilst these "solutions" to the issue of value may be appropriate, they have a weakness: they may be focused on the needs of the accrediting organizations and the tertiary institutions rather than on the needs of students and academics. The importance of a framework that considers the perspectives of different stakeholders was highlighted in 2002, when the Council for Higher Education Accreditation (2002) held an international seminar at which two of the three key speakers discussed the importance of a framework that ensured there was value in e-learning. These discussions related to a proposal that accreditation of higher education should become part of the General Agreement on Trade in Services (GATS) of the World Trade Organization (Council for Higher Education Accreditation, 2002). This is a concern, as there is the potential that international bureaucrats, rather than the education sector of each country, would manage standards. Reflecting this concern, research was undertaken to develop a framework incorporating value from the viewpoints of three stakeholder groups (academics, students, and management) within the New Zealand university used in this case. Online learning was looked at as a whole and no distinction was made between the two models of e-learning, that is, flexible and enhanced. This lack of distinction could have affected the results, especially if students perceived that the flexible component did not add value to the course.
Results could also be affected by students who did not consider that the flexible component of
their course was important. However, the next section, which focuses on students' perspectives, looks at the two models of e-learning and highlights any differences. Initially, a comprehensive list of criteria describing what would be considered valuable in online learning was developed from each stakeholder group's perspective. The criteria for academics were determined by a nominal group (Brahm & Kleiner, 1996) made up of academics from different business disciplines experienced in online learning. A nominal group was considered appropriate as it could generate ideas about value criteria and then prioritise them (Uribe, Schweikhart, Pathak, Marsh, & Fraley, 2002). To determine students' criteria for value, various studies that looked at the different factors making up online learning were identified (Berman & Pape, 2001; Cashion & Palmieri, 2000; Inglis, 1999; Lambert, 1996; Ponzurick, France, & Logar, 2000; Scott, 2001). To determine value criteria from the organization's perspective, the accreditation requirements of various accrediting agencies and the standards of tertiary institutions were examined to compile a list of value criteria (Barker, 2002; Distance Education and Training Council, 2002; Institute for Higher Education Policy, 2000; Southern Regional Education Board,
2000; Western Interstate Commission for Higher Education, 2002). Once these criteria were established, questionnaires were developed for each of the three stakeholder groups. Stakeholders were surveyed using an anonymous questionnaire. For students the response rate was 62% (47 questionnaires returned), for academics it was 32% (14 questionnaires returned), and for managers it was 83% (5 questionnaires returned). Stakeholders were asked to rank the importance of each criterion using the analytic hierarchy process (AHP) scale developed by Saaty (1994). This 1-9 scale has 17 steps, which seek to capture the sensitivity of criteria that are preferentially close to one another (Davies, 2001). Tullous and Utecht (1994) considered that evaluating multiple criteria simultaneously across different stakeholder groups was not an easy task. AHP was used to overcome this problem, as it provides a structure and procedure for incorporating different stakeholders' criteria. The results from the questionnaires were consolidated, by stakeholder, into the top 10 criteria considered essential for valuable e-learning delivered courses. Students' criteria focused on the materials, teaching, and information available about the course. Cashion and Palmieri (2000) refer to
Table 1. E-learning value criteria (Adapted from Sinclair, 2003b)

1. Convenient and secure access to learning platform
2. Sufficient information supplied to students about the paper
3. Technology utilised is appropriate
4. Qualified academics
5. Qualified technical staff
6. Academics are prepared at start of the semester
7. Materials available at start of the semester
8. Materials easy to use
9. Materials up to date and accurate
10. Paper recognised by employers
11. Learning outcomes are appropriate for flexible delivery
12. Readings and activities provide academic challenge
13. Training on the platform available to students
14. Approaches to learning encourage active learning
15. Development of critical thinking skills by students
16. Academics provide clarity on flexible tasks
17. Feedback by academics is constructive
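As an illustration of the AHP ranking procedure mentioned above, the following Python sketch derives priority weights from a reciprocal pairwise comparison matrix using the common geometric-mean approximation of the principal eigenvector, and reports Saaty's consistency ratio. The matrix values here are hypothetical and are not the study's data.

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray):
    """Priority weights from a reciprocal pairwise comparison matrix
    (judgments on Saaty's 1-9 scale), plus the consistency ratio (CR)."""
    n = pairwise.shape[0]
    # Row geometric means approximate the principal eigenvector.
    geo = pairwise.prod(axis=1) ** (1.0 / n)
    weights = geo / geo.sum()
    # lambda_max estimates the principal eigenvalue; CI / RI gives CR.
    lambda_max = float(np.mean(pairwise @ weights / weights))
    random_index = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]
    ci = (lambda_max - n) / (n - 1) if n > 1 else 0.0
    cr = ci / random_index if random_index else 0.0
    return weights, cr

# Hypothetical judgments for three criteria (illustrative only):
# criterion A is moderately/strongly preferred to B and C.
judgments = np.array([[1.0, 3.0, 5.0],
                      [1 / 3, 1.0, 2.0],
                      [1 / 5, 1 / 2, 1.0]])
weights, cr = ahp_weights(judgments)
```

A CR below roughly 0.1 is conventionally taken to mean the judgments are acceptably consistent; inconsistent questionnaires would be revisited or discarded.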
Figure 2. E-learning value framework (Derived from Sinclair, 2003b)
"learner readiness," but this research highlighted the importance of "academic readiness," that is, that the academic is prepared and materials are available at the start of the semester. Academics' criteria covered the materials but also the level of learning that takes place, for example, considering critical thinking more important than a straight recall of facts. The managers in the organization were concerned with the materials and teaching, but also that active learning takes place. The top 10 criteria from each stakeholder group were merged into a common list of 17 criteria (Table 1). Fifteen of the criteria were ranked highly by all the stakeholders. Two criteria were not ranked highly by students, who did not consider that materials and approaches to learning should provide an academic challenge. This reflects the tendency of students to concentrate on passing the course, that is, the present, rather
than on their employability, the future. Students tend to undervalue critical thinking and academic challenge, which extend their knowledge and would, it is hoped, make them more employable. Five distinct categories emerged from the list: accessibility, components, satisfaction, learning experience, and interaction. The subsequent classification of the criteria under the appropriate categories allowed a value framework for e-learning to be constructed (Figure 2). This framework will be used to measure the value of an online course by analysing the data relevant to the components of the framework. The next section analyses data collected from and about one of the stakeholder groups (students). The issues identified are aligned with the five categories of the e-learning value framework (Figure 2).
PERCEIVED VALUE AND OVERALL STUDENT SATISFACTION WITH E-LEARNING
To investigate the student perspective, two sources of data were used: the actual usage patterns of the online platform were investigated in conjunction with a study of perceived satisfaction with e-learning. Students were surveyed using an anonymous questionnaire. In enhanced mode, the response rate was 71% (197 questionnaires returned), with 84% of the respondents regarding themselves as full-time students and 50% generally inclined to prefer e-learning to face-to-face learning. In flexible mode, the response rate was 65% (294 questionnaires returned), with 85% of the respondents regarding themselves as full-time students and 52% preferring e-learning to face-to-face learning. The e-learning value framework proposed in the previous section (Figure 2) was applied to analyse the data collected. Five main issues aligned with framework categories emerged from the findings (Figure 3).

Issue 1: "Accessibility"
The first issue emerged from the investigation of the time dimensions of actual usage of the online platform. Online activities took place predominantly during daytime hours, slowing down in the evening. The curve for participants in flexible mode peaks later in the day compared with the curve for enhanced mode (Figure 4). The days from Monday to Thursday were characterised by heavier usage than Friday to Sunday, with students in flexible mode more active during the weekend (Figure 5). As shown in more detail in Petrova and Sinclair (2005), there was very little variation in these two patterns across the semester, or by course level. In summary, it appears that e-learning as undertaken in this case study is not too different from face-to-face learning in terms of "when"
Figure 3. Issues with perceived value of e-learning (students)
Figure 4a. Average daily and hourly use of the online platform
Figure 4b. Average daily and hourly use of the online platform
it occurs. The 24/7 access and the possibility to study at any time may not be highly important as most students may still prefer to “e-learn” at the same time as they would study normally.
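Hour-of-day and day-of-week profiles like those behind Figures 4 and 5 can be derived by simple aggregation of access timestamps. The sketch below uses hypothetical log entries; the study itself worked from BlackBoard™'s collated monthly statistical reports rather than raw logs.

```python
from collections import Counter
from datetime import datetime

def usage_profile(timestamps):
    """Aggregate platform access timestamps into hour-of-day and
    day-of-week counts (the shape of the Figure 4 / Figure 5 curves)."""
    by_hour = Counter(t.hour for t in timestamps)
    by_day = Counter(t.strftime("%a") for t in timestamps)
    return by_hour, by_day

# Hypothetical access times (illustrative only).
logs = [
    datetime(2004, 9, 6, 10, 15),   # Monday morning
    datetime(2004, 9, 6, 14, 2),    # Monday afternoon
    datetime(2004, 9, 11, 20, 40),  # Saturday evening
]
by_hour, by_day = usage_profile(logs)
```

Plotting `by_hour` per mode would reproduce the daytime peak described above, and `by_day` the Monday-to-Thursday concentration.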
Issue 2: "Components"

The top four most used components of BlackBoard™ were the Content Area, the Groups Area, the Discussion Board, and the Announcements Area. There were variations according to the online learning model: in enhanced mode the Content Area was the most commonly used feature, while in flexible mode the most used component was the Groups Area. The Discussion Board and, to a lesser extent, the Announcements Area were used similarly in the two models (Figure 6). When students were asked to indicate which components they had used, both groups gave a similar percentage of "yes" answers for the Announcements Area and the Content Area (Figure 7). A greater number of "flexible" participants
Figure 6. Use of the online platform by component
Figure 7. Use of the online platform by component
had used the Discussion Board than had "enhanced" participants (90% vs. 58%), and similarly the Groups Area (85% vs. 46%). In other words, the value of the online platform, according to students, lies in its dual role as a course content organiser and a communication channel between them and the academic. The data indicate that there are different component usage patterns within each mode. In enhanced mode the Content Area component was used most, followed by the Announcements Area, the Discussion
Board, and the Groups Area. There was less emphasis on the use of online components aimed at developing student capabilities: students used BlackBoard™ predominantly for information gathering. This pattern is fairly typical of courses where delivery is enhanced with e-learning (Lee & Nguyen, 2005; Phillimore, 2002). In flexible mode, there was much greater use of e-learning for communicative activities, such as group work and online discussions. A much wider range of capabilities is being addressed, and the emphasis
on using the online platform for this purpose may reflect the wide use of collaborative learning in flexible mode. It appears that the most valuable components of the learning platform (irrespective of the e-learning model) are the ones providing storage, organization, and direct communication facilities. The pattern of use of the top four components differs with respect to the e-learning model. Components related to capability development may be more valuable to participants involved in flexible rather than in enhanced e-learning.
Issue 3: "Student Satisfaction"

The survey instrument addressed student satisfaction with e-learning through the set of two
questions shown in Table 2, which are used as general satisfaction indicators. The overall level of student satisfaction with e-learning is shown in Figure 8, with the values of the first two indicators above 50%. The two learning modes (enhanced and flexible) display the same trend. However, the graph highlights an issue: while most students would be happy to take another e-learning course (meaning that they see value in it), fewer are prepared to recommend the course to a peer, especially in enhanced mode (58%). The reluctance to declare publicly that the course is of high value suggests some uncertainty on the part of students as to the benefits of e-learning. In other words, it cannot be concluded with confidence that students are convinced e-learning is more beneficial than face-to-face learning.
Table 2. General satisfaction indicators

- Would you choose another [course] with a flexible (or enhanced) option? (Yes/No)
- Would you recommend this [course] to another person based on its flexible (or enhanced) mode of delivery? (Yes/No)
Figure 8. Overall student satisfaction and student experience indicators
Table 3. Student experience indicators (statements ranked on a Likert scale from 1, strongly disagree, to 5, strongly agree)

- So far my experiences with this course have been positive.
- The online mode of this [course] met my expectations.
Table 4. Specific student satisfaction indicators (statements ranked on a Likert scale from 1, strongly disagree, to 5, strongly agree)

- Assessment tasks were supported by [BlackBoard™].
- [BlackBoard™] supported communication between academics and students well.
- [BlackBoard™] helped me to keep up to date with changes, deadlines, and notices.
- [BlackBoard™] provided adequate storage for course materials.
- [BlackBoard™] provided adequate additional course materials.
Issue 4: "Student Learning Experience"

The survey addressed students' perceptions of their e-learning experience through two indicators (Table 3). Over 70% of students rated their experience as positive, as shown in Figure 8. However, only 62% of students in flexible mode and 66% of students in enhanced mode reported that their overall expectations of the course had been met. The graph in Figure 8 highlights the issue: students are positive about their own e-learning experience but are less sure that their expectations were met. As with the issue discussed previously, the data analysis so far has not offered a plausible explanation, except for the speculative suggestion that students were not sufficiently informed about all aspects of the course prior to starting it to be able to form reasonable expectations.
Issue 5: "Online Interaction"

Specific aspects of student satisfaction with e-learning were addressed through the indicators
in Table 4. Figure 9 shows the level of satisfaction with five different pedagogical aspects of e-learning (assessment, communication with the academic, communication about the course, access to course material, and access to additional material). All indicators are above 60%. There was a higher level of satisfaction in flexible mode with the ways in which e-learning supported communication with the academic, communication about the course, and assessment. In flexible mode, students were most satisfied with course communication (82%), while in enhanced mode students were most satisfied with the support for course content storage (83%). Thus a fifth issue emerged from the data, related to the adequacy of the level of online interaction. It seems that students (who are mostly studying full time) are engaged in e-learning in a similar way in both flexible and enhanced mode; it may be concluded that the design differences between the two blended e-learning models have not led to significant differences in the use of the learning environment (as also supported by the data discussed in Issue 1). In both modes the same BlackBoard™ components are used, and the emphasis is on tools which improve communication and
Figure 9. Student satisfaction with e-learning pedagogical aspects (average)
also course organization, but not on tools which support the development of student capabilities. This is somewhat in contrast with the expectation that blended e-learning models would offer more diverse opportunities for deep learning, and that the mix of face-to-face instruction and computer-based communication, including the Internet, in a blended learning situation "will create a myriad of educational possibilities that reflects … pedagogical richness" (Mortera-Gutierrez, 2006, p. 317).
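Satisfaction levels such as those plotted in Figures 8 and 9 are typically reported as the share of respondents agreeing with an indicator, that is, a rating of 4 or 5 on the five-point scale. A minimal sketch, with hypothetical ratings rather than the study's raw data:

```python
def percent_agree(ratings):
    """Share (as a percentage) of five-point Likert ratings that
    indicate agreement, i.e. a score of 4 or 5."""
    agreeing = sum(1 for r in ratings if r >= 4)
    return round(100.0 * agreeing / len(ratings), 1)

# Hypothetical ratings for a single Table 4 indicator (illustrative).
ratings_flexible = [5, 4, 4, 3, 5, 4, 2, 4, 5, 4]
share = percent_agree(ratings_flexible)  # 80.0
```

Computing this share per indicator and per delivery mode yields bar values directly comparable to the percentages quoted in the text (e.g., 82% for course communication in flexible mode).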
FUTURE TRENDS

Future trend patterns emerging from the data about student perceptions of e-learning value, and from the data about actual and perceived e-learning platform usage, cluster around "course design" (e.g., the e-learning model and the underpinning pedagogy) and "course delivery" (e.g., the e-learning platform). The first pattern relates to course design. It was evident from the findings that students' experiences were positive but not overwhelmingly so, as they needed a better roadmap of the e-learning
journey. It might be expected that students will gradually develop their own effective online study habits (Sharpe & Benfield, 2005). Still, the future of e-learning depends heavily on academics providing clear explanations about the purpose of online work and the expected involvement, and succinct instructions addressing student responsibilities. The case data provide evidence indicating that students are reasonably well satisfied with the level and quality of online interaction. According to Kickul and Kickul (2006), the perceived value of e-learning, and hence satisfaction with its value, correlates positively with the level of online interaction among proactive learners. Therefore the e-learning of the future will need to "embrace learning solutions that are built upon the principles of connectedness, communication, creative expression, collaboration and competitiveness," to quote Adobe Systems' Ellen Wagner in Neal (2006). The second pattern relates to course delivery. It was interesting to observe, for example, that only about half of the students who had already been exposed to e-learning (52%) showed a preference for using it. Possible reasons may include
E-Learning Value and Student Experiences
students not understanding the role of the medium in an on-campus university (Sharpe & Benfield, 2005); students finding it difficult to adapt to the change of the educational model (Mortera-Gutierrez, 2006); and students having time management problems (Hunt et al., 2002). In future learning environments it might be expected that blending will commonly occur in the area of e-learning platform support, as in, for example, the intelligent tutoring system where text messages are stored in Web-accessible format and later disseminated (Silander & Rytkohen, 2005). With regard to the supporting IT infrastructure, blended e-learning models may need to support a more diverse range of communication channels and more sophisticated tools for detailed feedback on assessment activities, and thus to provide more stimuli for developing students as highly motivated e-learning participants (Hisham et al., 2004; Wentling, Waight, Gallaher, La Fleur, Wang, & Kanfer, 2000). Another trend observed was the use of the e-learning platform predominantly during the daytime on weekdays; with similar results reported by Burr and Spennemann (2004) and earlier by McKnight and Demers (2002), it seems that whilst 24/7 access is required, the emphasis should be on providing sufficient capacity and technical support during normal business hours. This relates to the rising importance of work/life balance (Goode, 2003). Based on the patterns discussed above, four likely drivers of future learners' satisfaction with e-learning can be identified: the appropriateness of pedagogy, the level of interaction, the level of blending of models and platforms, and the balance between "life" and study. These results confirm some prior research findings (Gerbic, 2002; Petrova, 2002; Sinclair, 2003b).
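The weekday/daytime usage pattern described above can be derived from raw platform tracking data (such as the BlackBoard™ records used in this study) with a short script. The sketch below is purely illustrative: the log format, the `usage_profile` helper, and the 8:00–18:00 "business hours" window are assumptions for the example, not details taken from the study.

```python
from datetime import datetime

# Hypothetical access-log entries (timestamp strings), standing in for the
# kind of LMS tracking data analysed in the study.
access_log = [
    "2006-03-06 10:15",  # Monday, business hours
    "2006-03-06 14:40",  # Monday, business hours
    "2006-03-07 11:05",  # Tuesday, business hours
    "2006-03-08 21:30",  # Wednesday, evening
    "2006-03-11 16:00",  # Saturday, weekend
]

def usage_profile(log, open_hour=8, close_hour=18):
    """Return the share of accesses falling within weekday business hours."""
    in_hours = 0
    for entry in log:
        t = datetime.strptime(entry, "%Y-%m-%d %H:%M")
        # weekday() < 5 means Monday-Friday
        if t.weekday() < 5 and open_hour <= t.hour < close_hour:
            in_hours += 1
    return in_hours / len(log)

share = usage_profile(access_log)  # 3 of the 5 sample accesses fall in the window
print(f"{share:.0%} of accesses fall in weekday business hours")
```

Computing this share per week or per course would show whether the daytime-weekday concentration reported by Burr and Spennemann (2004) holds in a given deployment, and so whether support capacity should be weighted toward normal business hours.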
The alignment of the drivers is also consistent with the initial research framework (Figure 1) in which the e-learning environment is created through stakeholder participation in the two basic teaching and learning processes: course design and course delivery.
Conclusion

The work presented in this chapter investigates stakeholders' perceptions about the value of online learning in a New Zealand undergraduate business degree, based on the premise that advancing e-learning needs to be grounded in a good understanding of the value attributed to e-learning and of the indicators of overall student satisfaction with e-learning. An e-learning value framework was proposed and used to study data collected through a survey and from BlackBoard™ records. The analysis of the emerging and future trends showed that in the future blending is likely to occur not only along the pedagogical, but also along the technological and even the organizational dimension of e-learning, and should place an emphasis on aligning with work/life balance. Stakeholders' increased expectations of e-learning value will continue to present a challenge and will provide an area of fruitful further research.
Future Research Directions

The importance of student understanding of and satisfaction with both online delivery models and features of the e-learning environment, and the need to provide effective interaction and participation mechanisms to online learners, encourage future research in several directions. Further research into student adoption of e-learning, applying well-established information technology adoption models, may help to better understand student motivation in specific contexts (Ndubisi, 2006), while studies with a focus on a particular discipline, for example, accounting, may help enhance course design (Flynn, Concannon, & Bheachain, 2005; Wells et al., 2005). Along with more in-depth studies of student satisfaction, motivation, and online learning styles (Hisham et al., 2004; Sharpe & Benfield, 2005), a more detailed investigation of the factors driving academic
motivation (Tastle et al., 2005) and the required special training is also needed. The cases presented support the notion that although students are satisfied with e-learning in a course currently taken, they might not have formed a sufficiently positive attitude towards e-learning in general, and therefore cannot recommend it to others with confidence. Studying student perceptions and satisfaction with e-learning will therefore need to continue, as also evidenced by works such as Flynn et al. (2005), Hisham et al. (2004), Hunt et al. (2002), Ndubisi (2006), Selim (2005), and Wells et al. (2005). With the observed increase in the range of user interfaces, physical devices, and supporting infrastructure driven by new and emerging information and communication technologies (Blinco et al., 2004), further research is needed in the area of blended models: blending content from different sources such as multimedia (Verhaart & Kinshuk, 2004), blending content with learning processes (Britain, 2004; Buzzetto-More & Pinhey, 2006), and blending delivery platforms as, for example, through the use of mobile networks (Petrova, 2007). Such research will help create a more satisfactory and fulfilling e-learning environment. Finally, further research will help identify and conceptualise advanced blended learning models.
References

Barker, K. (2002). Canadian recommended e-learning guidelines. CACE. Retrieved October 19, 2007, from http://www.futured.com/pdf/CanREGs%20Eng.pdf

Berman, S. H., & Pape, E. (2001). A consumer's guide to online courses. School Administrator, 58(9), 14.

Blinco, K., Mason, J., McLean, N., & Wilson, S. (2004, July 19). Trends and issues in e-learning infrastructure development: A white paper for alt-i-lab 2004, prepared on behalf of DEST (Australia) and JISC-CETIS (UK) (Version 2). Retrieved October 19, 2007, from http://www.jisc.ac.uk/uploaded_documents/Altilab04-infrastructureV2.pdf

Brahm, C., & Kleiner, B. H. (1996). Advantages and disadvantages of group decision-making approaches. Team Performance Management, 2(1), 30-35.

Britain, S. (2004, May). A review of learning design: Concept, specifications and tools. JISC. Retrieved October 19, 2007, from http://www.jisc.ac.uk/uploaded_documents/ACF1ABB.doc

Burr, L., & Spennemann, D. H. R. (2004). Patterns of user behaviour in university online forums. International Journal of Instructional Technology and Distance Learning, 1(10), 11-28.

Buzzetto-More, N. A., & Pinhey, K. (2006). Guidelines and standards for the development of fully online learning objects. Interdisciplinary Journal of Knowledge and Learning Objects, 2, 95-104.

Cashion, J., & Palmieri, P. (2000). Quality in online learning: Learners' views. Retrieved October 19, 2007, from http://flexiblelearning.net.au/nw2000/talkback/p14-3.htm

Cavanaugh, C. (2002). Distance education quality: The resources-practices-results cycle and the standards. Retrieved October 19, 2007, from http://www.unf.edu/~caavanau/2569.htm

Council for Higher Education Accreditation. (2002). International quality review. Retrieved October 19, 2007, from http://www.chea.org/international/inter_summary02.html

Davies, M. (2001). Adaptive AHP: A review of marketing applications with extensions. European Journal of Marketing, 35(7), 872-893.

Distance Education and Training Council. (2002). DETC accreditation overview. Retrieved October 19, 2007, from http://www.detc.org/content/freePublications.html

Flynn, A., Concannon, F., & Bheachain, C. N. (2005). Undergraduate students' perceptions of technology-supported learning: The case of an accounting class. International Journal on E-Learning, 4(4), 427-444.

FuturEd. (2002). Consumer's guide to e-learning. Retrieved October 19, 2007, from http://www.futured.com/pdf/ConGuide%20Eng%20CD.pdf

Gerbic, P. (2002). Learning in asynchronous environments for on campus students. In R. Kinshuk, K. Lewis, R. Akahori, T. Kemp, L. Okamoto, C. Henderson & H. Lee (Eds.), Proceedings of the 9th International Conference on Computers in Education (Vol. 2, pp. 1492-1493). Auckland, New Zealand: Asia-Pacific Society for Computers in Education.

Global University Alliance. (2000). About GUA. Retrieved October 19, 2007, from http://www.gua.com/shell/gua/index.asp

Goode, V. L. (2003). Lifestyle in the balance. Chartered Accountants Journal, 82(3), 22-24.

Heerema, D. L., & Rogers, R. L. (2001). Avoiding the quality/quantity trade-off. T.H.E. Journal, 29(5), 14-21.

Hisham, N., Campton, P., & FitzGerald, F. (2004). A tale of two cities: A study on the satisfaction of asynchronous e-learning systems in two Australian universities. In R. Atkinson, C. McBeath, D. Jonas-Dwyer, & R. Phillips (Eds.), Beyond the comfort zone: Proceedings of the 21st ASCILITE Conference (pp. 395-402). Perth, Australia: ASCILITE. Retrieved October 19, 2007, from http://www.ascilite.org.au/conferences/perth04/procs/hisham.html

Hope, A. (2001). Quality assurance. In G. Farrell (Ed.), The changing faces of virtual education (pp. 125-140). London: The Commonwealth of Learning.

Hunt, L. M., Thomas, M. J. W., & Eagle, L. (2002). Student resistance to ICT in education. In R. Kinshuk, K. Lewis, R. Akahori, T. Kemp, L. Okamoto, C. Henderson & H. Lee (Eds.), Proceedings of the 9th International Conference on Computers in Education (Vol. 2, pp. 964-968). Auckland, New Zealand: Asia-Pacific Society for Computers in Education.

Inglis, A. (1999). Is online delivery less costly than print and is it meaningful to ask? Distance Education, 20(2), 220-232.

Institute for Higher Education Policy. (2000). Quality on the line. Retrieved October 19, 2007, from http://www.ihep.com/Pubs/PDF/Quality.pdf

Kickul, G., & Kickul, J. (2006). Closing the gap: Impact of student proactivity and learning goal orientation on e-learning outcomes. International Journal on E-Learning, 5(3), 361-372.

Lambert, M. P. (1996). The Distance Education and Training Council: At the cutting edge. Quality Assurance in Education, 4(4), 26-28.

Lee, Y. L., & Nguyen, H. (2005). So are you online yet?! Distance and online education today. In M. Khosrow-Pour (Ed.), Managing modern organizations with information technology: Proceedings of the 2005 Information Resources Management Association International Conference (pp. 1035-1036). San Diego, CA: Information Resources Management Association.

Leidner, D., & Jarvenpaa, S. (1995). The use of information technology to enhance management school education: A theoretical view. MIS Quarterly, 19(3), 265-291.

Lizzio, A., Wilson, K., & Simons, R. (2002). University students' perceptions of the learning environment and academic outcomes: Implications for theory and practice. Studies in Higher Education, 27(1), 27-52.

McKnight, R., & Demers, N. (2002). Evaluating course Web site utilization by students using Web tracking software: A constructivist approach. In Proceedings of the Technology, Colleges and Community Worldwide Online Conference 2002. Kapi'olani, HI: University of Hawaii. Retrieved October 19, 2007, from http://kolea.kcc.hawaii.edu/tcc/tcon02/presentations/mcknight.html

McLoughlin, C., & Luca, J. (2001). Quality in online delivery: What does it mean for assessment in e-learning environments? In G. Kennedy, M. Keppell, C. McNaught & T. Petrovic (Eds.), Meeting at the crossroads: Proceedings of the 18th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (pp. 417-426). Melbourne, Australia: Australasian Society for Computers in Learning in Tertiary Education. Retrieved October 19, 2007, from http://www.ascilite.org.au/conferences/melbourne01/pdf/papers/mcloughlinc2.pdf

Mortera-Gutierrez, F. (2006). Faculty best practices using blended learning in e-learning and in face-to-face instruction. International Journal on E-Learning, 5(3), 313-337.

Ndubisi, N. O. (2006). Factors of online learning adoption: A comparative juxtaposition of the theory of planned behaviour and the technology acceptance model. International Journal on E-Learning, 5(4), 571-591.

Neal, L. (2006, January 19). Predictions for 2006: E-learning experts map the road ahead. eLearn Magazine. Retrieved October 19, 2007, from http://www.elearnmag.org/subpage.cfm?section=articles&article=31-1

Petrova, K. (2001). Teaching differently: A hybrid delivery model. In N. Delener & C. N. Chao (Eds.), Proceedings of the 2001 Global Business and Technology Association International Conference (pp. 717-727). Istanbul, Turkey: Global Business and Technology Association.

Petrova, K. (2002). Course design for flexible learning. New Zealand Journal of Applied Computing and Information Technology, 6(1), 45-50.

Petrova, K. (2007). Mobile learning as a mobile business application. International Journal of Innovation in Learning, 4(1), 1-13.

Petrova, K., & Sinclair, R. (2005). Business undergraduates learning online: A one semester snapshot. International Journal of Education and Development using Information and Communication Technology, 1(4), 69-88. Retrieved October 19, 2007, from http://ijedict.dec.uwi.edu/viewissue.php?id=6

Phillimore, R. (2002). Face to face lectures or e-content: Student and staff perspectives. In R. Kinshuk, K. Lewis, R. Akahori, T. Kemp, L. Okamoto, C. Henderson & H. Lee (Eds.), Proceedings of the 9th International Conference on Computers in Education (Vol. 1, pp. 211-212). Auckland, New Zealand: Asia-Pacific Society for Computers in Education.

Ponzurick, T. G., France, K. R., & Logar, C. M. (2000). Delivering graduate marketing education: An analysis of face-to-face versus distance education. Journal of Marketing Education, 22(3), 180-187.

Quality Assurance Agency for Higher Education. (2002). Distance learning guidelines. Retrieved October 19, 2007, from http://www.qaa.ac.uk/public/dlg/dlg_textonly.htm

Saaty, T. L. (1994). How to make a decision: The analytic hierarchy process. Interfaces, 24(6), 19-43.

Scott, G. (2001). Assuring quality for online learning. Retrieved October 19, 2007, from http://www.qdu.uts.edu.au/pdf%20document/QA%20for%200

Selim, H. M. (2005). E-learning critical success factors: An exploratory investigation of student perceptions. In M. Khosrow-Pour (Ed.), Managing modern organizations with information technology: Proceedings of the 2005 Information Resources Management Association International Conference (pp. 340-346). San Diego, CA: Information Resources Management Association.

Sharpe, R., & Benfield, G. (2005). The student experience of e-learning in higher education. Brookes eJournal of Learning and Teaching, 3(1), 1-9. Retrieved October 19, 2007, from http://www.brookes.ac.uk/publications/bejlt/volume1issue3/academic/sharpe_benfield.pdf

Silander, P., & Rytkohen, A. (2005). An intelligent mobile tutoring tool enabling individualization of students' learning processes. In Proceedings of the 4th World Conference on mLearning (paper 59). Cape Town, Republic of South Africa.

Sinclair, R. M. S. (2003a). Components of quality in distance education. In G. Davies & E. Stacey (Eds.), Quality education @ a distance (pp. 257-264). Boston: Kluwer Academic Publishers.

Sinclair, R. M. S. (2003b). Stakeholders' views of quality in flexibly delivered courses. Unpublished master's research paper. Geelong, Australia: Deakin University.

Southern Regional Education Board. (2000). Principles of good practice. Retrieved October 19, 2007, from http://ww.electroniccampus.org/student/srecinfor/publicaitons/Principles_2000.pdf

Swan, M. K. (1995). Effectiveness of distance learning courses: Students' perceptions. In Proceedings of the 22nd Annual National Agricultural Education Research Meeting (pp. 34-38). Denver, CO. Retrieved October 19, 2007, from http://www.ssu.missouri.edu/SSU/AgEd/NAERM/sa-4.htm

Tastle, W. J., White, B. A., & Shackleton, P. (2005). E-learning in higher education: The challenge, effort, and return on investment. International Journal on E-Learning, 4(2), 241-250.

Tullous, R., & Utecht, R. L. (1994). A decision support system for integration of vendor selection tasks. Journal of Applied Business Research, 10(1), 132-144.

Uribe, C. L., Schweikhart, S. B., Pathak, D. S., Marsh, G. B., & Fraley, R. R. (2002). Perceived barriers to medical-error reporting: An exploratory investigation. Journal of Healthcare Management, 47(4), 263-280.

Verhaart, M., & Kinshuk, C.-K. (2004). Adding semantics and context to media resources to enable efficient construction of learning objects. In C. Kinshuk, K. Looi, E. Sutinen, D. G. Sampson, I. Aedo, L. Uden, & E. Kähkönen (Eds.), Proceedings of the 4th International Conference on Advanced Learning Technologies (pp. 651-653). Joensuu, Finland: IEEE Computer Society.

Wells, P., Fieger, P., & de Lange, P. (2005, July). Integrating a virtual learning environment into a second year accounting course: Determinants of overall student perception. Paper presented at the 2005 Accounting and Finance Association of Australia and New Zealand Conference, Melbourne, Australia.

Wentling, T. L., Waight, C., Gallaher, J., La Fleur, J., Wang, C., & Kanfer, A. (2000). E-learning: A review of literature. Knowledge and Learning Systems Group, University of Illinois at Urbana-Champaign. Retrieved October 19, 2007, from http://learning.ncsa.uiuc.edu/papers/elearnlit.pdf

Western Interstate Commission for Higher Education. (2002). Best practice for electronically offered degree and certificate programs. Retrieved October 19, 2007, from http://www.wiche.edu/telecom/Article1.htm

World Alliance in Distance Education. (2002). World alliance in distance education. Retrieved October 19, 2007, from http://www.wade-universities.org/index.htm
Additional Readings

American Federation of Teachers. (2000). Guidelines for good practice. Retrieved October 19, 2007, from http://www.aft.org/pubs-reports/higher_ed/distance.pdf

Australian National Training Authority. (2002). Flexibility through online learning. National Centre for Vocational Education Research. Retrieved October 19, 2007, from http://www.ncver.edu.au/research/proj/nr1F12/nr1F12.pdf

Bonk, C. J., & Graham, C. R. (2006). The handbook of blended learning: Global perspectives, local designs. New York: Pfeiffer Publishing.

Borotis, S., Zaharias, P., & Poulymenakou, A. (2007). Critical success factors for e-learning adoption and sustainability: A holistic approach. In T. Kidd (Ed.), Handbook of research on instructional systems and technology. New York: Idea Group Inc.

Calvert, J. (2003). Quality assurance and quality development: What will make a difference? In G. Davies & E. Stacey (Eds.), Quality education @ a distance (pp. 17-29). Boston: Kluwer Academic Publishers.

Carnevale, D. (2000a). Assessing the quality of online courses remains a challenge. The Chronicle of Higher Education, 46(24), A59.

Carnevale, D. (2000b). Study assesses what participants look for in high-quality online courses. The Chronicle of Higher Education, 47(9), A46.

Council for Higher Education Accreditation. (2001). The role of accreditation and assuring quality in electronically delivered distance learning. Retrieved October 19, 2007, from http://www.chea.org/pdf/fact_sheet_2.pdf

Council for Higher Education Accreditation. (2003). Important questions about diploma mills and accreditation mills. Retrieved October 19, 2007, from http://www.chea.org/degreemills/default.htm

Eaton, J. S. (2002). Maintaining the delicate balance: Distance learning, higher education accreditation, and the politics of self-regulation. Washington: American Council on Education Center for Policy Analysis.

Ehlers, U. (2004, May). Quality in e-learning from a learner's perspective. European Journal of Open, Distance and E-Learning. Retrieved October 19, 2007, from http://www.eurodl.org/materials/contrib/2004/Online_Master_COPs.html

E-learning in tertiary education: Where do we stand? (2005). Education & Skills, 4, 11-93. OECD, Centre for Educational Research and Innovation.

Frydenberg, J. (2002). Quality standards in e-learning: A matrix of analysis. International Review of Research in Open and Distance Learning, 3(2).

Gilroy, P., Long, P. D., Rangecroft, J., & Tricker, T. (2001). Evaluations and the invisible student: Theories, practice and problems in evaluating distance education provision. Quality Assurance in Education, 9(1), 14-22.

Hodges, C. B. (2004). Designing to motivate: Motivational techniques to incorporate in e-learning experiences. Journal of Online Interactive Learning, 2(3). Retrieved October 19, 2007, from http://www.ncolr.org/jiol/issues/viewarticle.cfm?volID=2&IssueID=8&ArticleID=31

Hoppe, G., & Breitner, M. H. (2003). Business models for e-learning. Retrieved October 19, 2007, from http://www.wiwi.unihannover.de/fbwiwi/forschung/diskussionspapiere/dp287.pdf

International Federation of Accountants. (2000). Quality issues for Internet and distributed learning in accounting education. New York: International Federation of Accountants. Retrieved October 19, 2007, from http://www.ifac.org/Members/DownLoads/EDC-QualityIssues.pdf

Kettunen, J., & Kantola, M. (2006). Strategies for virtual learning and e-entrepreneurship in higher education. In F. Zhao (Ed.), Entrepreneurship and innovations in e-business. Hershey, PA: IRM Press.

Lindh, J., & Soames, C. (2004). Are students' and teachers' views on online courses in accordance? A dual perspective on an online university course. Electronic Journal on eLearning, 21(1), 129-134.

Ling, P., Arger, G., Smallwood, H., Toomey, R., Kirkpatrick, D., & Barnard, I. (2001). The effectiveness of models of flexible provision of higher education. Canberra, Australia: Department of Education, Training and Youth Affairs, Commonwealth of Australia.

McPherson, M. (2002). Organizational critical success factors for managing e-learning. In R. Kinshuk, K. Lewis, R. Akahori, T. Kemp, L. Okamoto, L. Henderson & C. H. Lee (Eds.), Proceedings of the 9th International Conference on Computers in Education (Vol. 2, pp. 1540-1541). Auckland: Asia-Pacific Society for Computers in Education.

Open and Distance Learning Quality Council. (2005). Standards in open & distance learning. Retrieved October 19, 2007, from http://www.odlqc.org.uk/standard.doc

Parry, D. (2004). What do online learners really do, and where and when do they do it? Bulletin of Applied Computing and Information Technology, 2(2). Retrieved October 19, 2007, from http://www.naccq.ac.nz/bacit/0202/2004Parry_eLearners.html

Rosenberg, M. (2006). What lies beyond e-learning? Learning Circuits. Retrieved October 19, 2007, from http://www.learningcircuits.org/2006/March/rosenberg.htm

Sahay, S. (2004). Beyond utopian and nostalgic views of information technology and education: Implications for research and practice. Journal of the Association for Information Systems, 5(7), 282-313.

Schmees, M. (2004). Integrating e-commerce into e-learning. In Proceedings of the 6th International Conference on Electronic Commerce (pp. 177-186). Delft, The Netherlands: Association for Computing Machinery.

Trentin, G. (2000). The quality-interactivity relationship in distance education. Educational Technology, 40(1), 17-27.

Twigg, C. (2001). Quality assurance for whom? Retrieved October 19, 2007, from http://www.center.rpi.edu/pew5ym/mono3.pdf

Valiathan, P. (2002). Blended learning models. Learning Circuits: American Society for Training and Development (ASTD)'s source for e-learning. Retrieved October 19, 2007, from http://learningcircuits.org/2002/aug2002/valiathan.html

Zentel, P., Bett, K., Meister, D. M., Rinn, U., & Wedekind, J. (2003). A change process at university: Innovation through ICT? In R. Williams (Ed.), Proceedings of the 2nd European Conference on eLearning (pp. 507-513). Glasgow, United Kingdom: Academic Conferences International.
Chapter VIII
Integrating Technology and Research in Mathematics Education: The Case of E-Learning

Giovannina Albano, Università di Salerno, Italy
Pier Luigi Ferrari, Università del Piemonte Orientale, Italy
Abstract

This chapter is concerned with the integration of research in mathematics education and e-learning. We provide an overview of research on learning processes related to the use of technology and a sketch of constructive and cooperative methods and their feasibility in an e-learning platform. Moreover, we introduce a framework for dealing with language and representations to interpret students' behaviours and show examples of teaching activities. Finally, some opportunities for future research are outlined. We hope to contribute to overcoming the current separation between technology and educational research, as their joint use can provide matchless opportunities for dealing with most of the learning problems related to mathematical concepts as well as to linguistic, metacognitive, and noncognitive factors.
Introduction

The main concern of this chapter is the integration of technology and research in the field of mathematics education. Currently, technology is too often used with little or no concern for the results of educational research, despite the fact that these results could provide valuable help both to magnify the outcomes and to keep away from some unwelcome washback. Conversely, research in mathematics education too often disregards the impressive opportunities technology could provide. Through the chapter we focus on e-learning as a domain appropriate for integrating technology and educational research. We argue that nowadays technology is flexible enough to be used within different theoretical frameworks (such as the constructivist and the socio-cultural ones) and at different levels (cognitive, metacognitive, noncognitive). We also show that technology can provide matchless opportunities for dealing with most of the learning problems related to language and representations. In the section "Background" we give:

• A concise overview of some outcomes of research that underline the complexity of educational processes, and in particular the need for taking into account not just cognitive, but also metacognitive and noncognitive aspects;
• An overview of research on individual and personal learning processes related to the use of technology;
• A sketch of the main features of constructive and cooperative methods and their feasibility in an e-learning platform;
• A framework for dealing with language and representations in order to effectively interpret students' behaviors.
In the section "Teaching and Learning Opportunities," we show examples of teaching activities which fulfil some of the requirements sketched above and apply some of the ideas and methods discussed there. The section "Future Trends and Conclusions" includes some discussion of the opportunities for future research. In all the examples described in this chapter we refer either to Moodle (Moodle, 2006) or to IWT (Intelligent Web Teacher, 2006). The latter is a distance-learning platform designed to lay the foundation for next-generation e-learning (for details, see Albano, Gaeta, & Salerno, 2006, or Intelligent Web Teacher, 2006).
Background

Technology and Research on Mathematics Education

Currently, information and communication technology (ICT) is not strictly linked to any theoretical framework in mathematics education. This was not the case in the past, as it was sometimes naively associated with some specific cognitive framework (e.g., information-processing theory) or even with some interpretation of mathematics (e.g., computational ones). This may account for the relatively poor role played by ICT in most studies in the psychology of mathematics education. We also assume that the use of ICT is not a simple matter, but requires the development of detailed teaching paths and much research to fully exploit the opportunities provided and to keep away from any potential drawbacks. Research on mathematics education, conversely, has widely shown the complexity of teaching and learning processes, and thus the inadequacy of one-dimensional models, including the belief that the simple addition of some technology to standard teaching practices could provide considerable improvements in the outcomes. In particular, any model for mathematics education has to consider that students' performances are affected by factors belonging to at least three different levels:
• The cognitive level, which involves the learning of the specific concepts and methods of the discipline, also related to the obstacles recognized by research and practice;
• The metacognitive level, which involves learners' control of their own learning processes;
• The noncognitive level, which involves beliefs, emotions, and attitudes, and all affective aspects, which are most often critical in shaping learners' decisions and performances.
As we will see below, ICT can play a part in each of these levels, including the noncognitive one: on the one hand it can deeply influence learners' beliefs, emotions, and attitudes related to mathematics, and on the other hand it is itself the object of deep-rooted beliefs and can produce effects at the noncognitive level. So any study integrating ICT and research on mathematics education has to take into account noncognitive factors related to technology as well as to mathematics. In the next sections we will focus on some issues which are regarded as critical by research in mathematics education and could be dealt with in a more appropriate way with the help of an e-learning platform: constructive learning, cooperative learning, language and representations, and noncognitive implications. Of course, although we examine each of them separately, in teaching practice these issues cannot be dealt with in isolation.
Individual and Personal Teaching and Learning

The individualisation of teaching is one of the most critical issues in instructional practice. It is well known that some instructional strategies are more or less effective for particular individuals depending upon their specific abilities. According to Cronbach and Snow (1997), the best learning
achievements occur when the instruction is exactly matched to the aptitudes of the learner. At first, we can say that individualisation regards how much the instruction fits students' characteristics, creating learning situations suitable for different students. In particular, we refer to individualisation at the teaching level which, according to Baldacci (1999), means the adjustment of the teaching to the individual students' characteristics, by means of specific and concrete teaching practices. Another major goal is the personalisation of the teaching, which refers to the set of activities directed to stimulate each specific person in order to achieve the maximum intellectual capability. It is clear that neither individualisation nor personalisation is possible at undergraduate level, especially with large classes of freshman students, if teaching is still based on standard lectures. The didactical transposition carried out by the teacher is based on general parameters, which arise from the average of sets of data regarding, for example, previous curriculum and knowledge, attitude to mathematics, metacognitive awareness, and so forth, and which can hardly suit the actual needs or problems of the individuals. On the contrary, the modality of blended learning, that is, the support of standard lectures by online activities, seems to make a considerable contribution in the right direction. The belief that there exist teaching methods which produce the best outcomes has long been discarded, and learning is now regarded as the result of a process whose core is the pair person-situation, which is influenced by both teaching methods and individual differences (Jonassen & Grabowski, 1993). In particular, the support of diversity in students' methods is also viewed as a guide for mathematical learning (Balacheff & Sutherland, 1999).
Integrating Technology and Research in Mathematics Education

From the viewpoint of individualisation, the teaching procedures included in the platform should get the students to attain the basic skills by means of a choice of different learning paths, whereas from that of personalisation teaching activities should be planned in order to allow the students to attain excellence in their own way, through specific opportunities to develop their cognitive potential. In order to develop each student's specific skills, it is necessary to let the student be free to move, to choose, to plan, and to manage suitable cognitive situations. From this perspective, e-learning platforms allow teachers to create learning situations appropriate for each student. In this context, the teacher, who might more properly be referred to as the author, is not just a content developer, but becomes an organizer of contexts in which the content is aimed at the attainment of well-defined goals. All this requires the author to use a range of skills, from those related to teaching to technological ones. Following Brousseau (1997), we can say that in an e-learning environment the role of the author is to prepare a-didactical situations, that is, situations in which attention is paid to the students and the knowledge, not to the teacher. There are no specific teaching constraints, so what the learner does is not affected by any pressure from the teacher, and the knowledge system is modified as a result of adaptation processes linked to the strategies performed. Individualisation is possible insofar as a choice of teaching materials, such as written texts, multimedia files, interactive exercises, and so on, is made available to the learner. The learner should be given a wide range of stimuli through different sensorial channels (auditory, visual, manipulative, and so forth) for each teaching unit, in order to make it easier to adjust the teaching style to the learning styles of the learners. This way the student can learn any content more easily, as the teaching modalities are better suited to the student's cognitive styles, allowing the student to overcome some learning difficulties.
According to Balacheff (2000), “learning does not occur because of one specific type of interaction, but because of the availability of all of them. One type of interaction, or one type of agent, being selected depending on the needs of the learner at the time when the interaction is looked for, as well
as of the specific characteristics of the knowledge at stake.” (p. 2) Thus the learning paths can be individualised according to the student's profile, with particular reference to the skills being acquired and the learning style. This kind of individualisation/personalisation can be realised automatically by the platform or can be constructed by each student through the learning process. In fact, for each teaching unit the student can ask the system for the list of the other teaching units regarding the same concept at stake. Moreover, students can add personal annotations to the teaching units, which can be simple textual notes or video and audio files or figures. They also have a space to share resources with each other. In such a way students interact with the learning material in a tri-dimensional relationship: they do not restrict themselves to receiving and elaborating objects (as in the case of a book), but produce new learning objects starting from the ones placed at their disposal by the platform (Maragliano, 2000). Resources like Moodle's lesson or IWT's didactic unit may be the starting point for developing individualised or personalised learning paths. In that frame students are required to perform a test at the end of each unit or group of units in order to proceed to the next one. In case of satisfactory results each student will automatically be given access to the next unit; otherwise the student will be kept in the current unit or directed to a remedial unit. The questions included in the test may regard just the understanding of the text from the viewpoint of language, or the specific contents. Pros and cons of tests are more widely discussed in the “Self-Evaluation” section. In the perspective of personalisation, open-ended questions and reflection on “wrong” answers constitute the starting point of new problem situations the learner can deal with.
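The unit-gating logic just described (a satisfactory test result unlocks the next unit; otherwise the student stays on the current unit or is sent to a remedial one) can be sketched in a few lines of Python. The function and all names here are purely illustrative, not part of any actual Moodle or IWT API.

```python
# Sketch of the unit-gating logic: after each teaching unit the student
# takes a test; a satisfactory score unlocks the next unit, otherwise
# the student repeats the unit or is redirected to a remedial one.
# All names are illustrative, not a real platform API.

def next_unit(current, score, pass_mark=0.6, remedial=None):
    """Return the index of the unit the student should work on next.

    current  -- index of the unit just completed
    score    -- test score in [0, 1]
    remedial -- optional index of a remedial unit for this topic
    """
    if score >= pass_mark:
        return current + 1        # unlock the following unit
    if remedial is not None:
        return remedial           # redirect to remedial material
    return current                # repeat the current unit

# A student scoring 0.8 on unit 3 proceeds to unit 4; one scoring 0.4
# is sent to the remedial unit 10 (or repeats unit 3 if none exists).
assert next_unit(3, 0.8) == 4
assert next_unit(3, 0.4, remedial=10) == 10
assert next_unit(3, 0.4) == 3
```

In a real platform the pass mark and the remedial target would of course be set by the author for each unit.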
Such opportunities allow “mistakes” to play a constructive educational role, as they can be productively used in the platform, in place of the (usually ineffective)
practice of just proposing the replication of what has already been presented. According to Perrin Glorian (1994), a mistake is sometimes provoked by previous knowledge, which had its own interest and successes, but which is false or inadequate in the new context. Being aware of and analysing such mistakes is a fundamental step in the construction of new knowledge. The aim is not to try to avoid any possible mistake, since mistakes are intrinsic to the process of construction of knowledge, but rather to minimise their effects, interpreting them problematically and developing the necessary awareness. Such activities involve constructive processes of problem solving, of interpretation, and of conversion of representations between different semiotic systems, as well as metacognitive aspects, such as the method used to read and understand a text. In fact, an increasing number of students seem to believe that learning means being able to repeat pieces of text, obviously with the help of some keywords, without bothering to draw even the simplest inferences from the text.
Constructive Methods

In mathematics education, constructive methods play a major and increasing role. An e-learning platform allows the learners to actively construct new knowledge as they interact with their environment. We are aware that some researchers adopt a more restricted definition of constructivism and would regard some computer environments and, more generally, some ways of using ICT as inconsistent with the constructivist stance. For example, graphing a function (defined by a symbolic expression) by means of the facilities of some computer algebra system might be regarded as nonconstructive, as some steps of the process are fully concealed from the learner, whereas programs explicitly computing the coordinates of a finite set of points of the graph of the function might be regarded as more suitable for a truly constructive approach. Although we understand some of
the concerns of the supporters of the restricted view, throughout this chapter we adopt an inclusive definition of constructivism and focus on each learner's opportunities to interact with the environment. Within an e-learning platform the learner can freely use a range of modules to construct his or her knowledge. Modules allowing some feedback, such as Moodle's “lesson” or “quiz,” as well as suitable IWT interactive learning objects (e.g., interactive online exercise sessions or Virtual Scientific Experiments), are especially relevant from this perspective.
Cooperative Learning

E-learning platforms generally provide a number of activities involving peer interactions or interactions between learners and tutors. Modules such as Moodle's “workshop,” “wiki,” or “task,” or the IWT classroom virtual space, are generally suitable for designing activities of this kind. In this section we describe some experiences with a “workshop” module at the undergraduate level. From the viewpoint of the theory of mathematics education, all of these activities can be framed within the so-called socio-cultural (or “discursive”) approach; for more information see Kieran, Forman, and Sfard (2001). Our idea is to support the students with online, time-restricted activities based on role-play, which actively engage them and induce them to face learning topics in a more critical way. It is well known that the cognitive processes induced by talking, discussing, and explaining to others the concepts to be learnt promote deeper-level or higher-order thinking (Johnson & Johnson, 1987). In this framework we want to put emphasis on peer learning (Boud, Cohen, & Sampson, 1999), intended as the use of teaching and learning strategies in which students learn with and from each other without the immediate intervention of a teacher. It includes peer tutoring and peer mentoring. When the students in a group act as both teachers and learners we
talk about reciprocal peer learning. This may incorporate self- and peer assessment, whereby students actively develop criteria for assessment. Falchikov (2001) analysed the various peer tutoring techniques and the benefits linked to each of them. She found evidence of some improvement in comprehension, memory for lecture content, and performance, as well as facilitation in encoding and retrieval of material, afforded by Guided Reciprocal Peer Questioning.
Language and Representations

The potential of information and communication technology as regards semiotic or linguistic issues is largely underestimated. Language is becoming one of the most relevant issues for research on mathematics education. On one hand, classes including students from different linguistic groups pose new teaching problems. On the other hand, even at the undergraduate level, a large share of students' failures can be ascribed to linguistic issues. An increasing number of students, for example, seemingly cannot properly understand a written verbal text even if it is simple and short. A detailed investigation of language-related students' failures is beyond the scope of this chapter. In this section we are going to focus on two aspects: Duval's (2005) investigation of semiotic representation systems and the pragmatic interpretation of mathematical language.

Semiotic Representation Systems and their Coordination

Duval's (2005) Theory of Semiotic Representation Systems provides a new insight on the role of semiosis in learning. Algebraic symbol notation, verbal language, Cartesian graphs, and geometrical figures are examples of semiotic representation systems. The main activities described by Duval (2005) are:

• The construction of a representation within a semiotic system, such as writing a text or a formula, or drawing a figure.
• The treatment of representations within a semiotic system, such as summarizing a verbal text, simplifying a formula, or transforming a geometrical figure.
• The conversion of representations from one semiotic system to another, such as verbally describing a figure, or writing a formula to represent the data of a word problem.
Duval often refers to semiotic representation systems as “registers.” We prefer to employ “register” to denote a use-oriented linguistic variety, according to the definition widely accepted in the field of linguistics. According to Duval, the main goal of education as far as semiotics is concerned is what he names the coordination of semiotic systems, which is the ability to use multiple representations of the same “object” and to move quickly from one to another. A problem involving real functions, for example, can be appropriately dealt with through the coordination of the verbal description of the function, its symbolic representation as an equation, and its Cartesian graph. The coordination of semiotic systems improves both understanding and problem-solving skills. On one hand, students who can coordinate semiotic systems can distinguish a concept from its representation (which is harder if one can deal with one representation only); on the other hand, they can adopt the best strategies provided by each representation (for example, symbolic computation of the derivative of a function or visual search for a tangent on the graph). ICT provides plenty of opportunities to use multiple representations. An e-learning platform can suggest a number of activities appropriate to the goal of achieving the coordination of semiotic systems.
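As a concrete illustration of conversion between representation systems, the following Python sketch moves a simple linear function between a symbolic/functional rule, a table of values, and a verbal description. The function, helper names, and the linearity assumption are our own, chosen only to make the idea tangible.

```python
# Illustrative sketch of "conversion" between semiotic systems for a
# simple linear function: from a symbolic rule to a table of values,
# and back from the table to the symbolic parameters.

def to_table(f, xs):
    """Symbolic/functional representation -> tabular representation."""
    return [(x, f(x)) for x in xs]

def table_to_linear(table):
    """Tabular representation -> symbolic parameters (slope, intercept),
    assuming the tabulated data really are linear."""
    (x0, y0), (x1, y1) = table[0], table[1]
    slope = (y1 - y0) / (x1 - x0)
    return slope, y0 - slope * x0

table = to_table(lambda x: 2 * x + 1, range(5))   # [(0, 1), (1, 3), ...]
slope, intercept = table_to_linear(table)
assert (slope, intercept) == (2.0, 1.0)

# Verbal representation, produced from the symbolic parameters:
description = f"multiply the input by {slope:g} and add {intercept:g}"
assert description == "multiply the input by 2 and add 1"
```

Each conversion preserves the underlying "object" while changing the system in which it is represented, which is exactly the kind of move Duval's coordination asks of the learner.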
A Pragmatic View on Mathematical Language

Recently, various frameworks have been proposed that underline the role of languages in the learning of mathematics. For example, Sfard (2000) interprets thinking as communication and regards languages not just as carriers of pre-existing meanings, but as builders of the meanings themselves. Under this perspective, language heavily influences thinking. On the other hand, there is evidence that a good share of students' troubles in mathematics, at any school level, including the undergraduate one, can be ascribed to the improper use of verbal language. More precisely, students often produce or interpret mathematical texts according to linguistic patterns appropriate to everyday-life contexts rather than to mathematical ones. The difference is not just a matter of vocabulary, grammar, or symbols, but heavily involves the organization of verbal texts, their functions, and their relationships with the context they are produced within. Under these assumptions, a pragmatic perspective has proven suitable to provide tools to interpret students' behaviors and to design appropriate teaching units. This means focusing on language use rather than on grammar, and regarding the interpretation of a text as a cooperative enterprise which involves not only vocabulary and grammar, but also the so-called encyclopedia. An e-learning platform provides plenty of opportunities for planning activities compatible with a pragmatic perspective. It is especially suitable for planning activities aimed at improving linguistic competence, including competence in verbal language, as it allows the authors to design a wide range of communication situations and to devise tasks forcing students to use more refined linguistic resources. An application of these ideas to advanced mathematics has been discussed by Ferrari (2004). All of the activities described in the above section on cooperative learning involve plenty of exchanges relevant from this perspective.

Teaching and Learning Opportunities
Self-Evaluation
Most e-learning platforms provide the opportunity of designing sets of questions with automatic evaluation of the answers. The admissible formats for the items include multiple choice, true/false, matching, fill-in, cloze procedure, short answer, and numerical answer. Apart from short-answer and numerical-answer items, the other formats only require the learners to select their answer from a prearranged set rather than construct the answer themselves. This might be a critical issue. Items can be designed according to different criteria: they could be focused on one subject only, or on a whole course. In general, correct answers equipped with some comment are made available to students as soon as they have submitted their own. Resources of this kind provide plenty of teaching opportunities, and some risks too. The item developers have to make the most of the benefits, exploiting the opportunities as much as possible, and to reduce the risks. This might make the development of the items a very troublesome business. Students might use the sets of questions individually or in groups, to get immediate feedback about some aspects of their learning. This may greatly affect not just their knowledge, but their confidence as well (the so-called sense of self-efficacy). The opportunity of trying and making mistakes without the judgment of another human being may help some students to grow more confident and to develop a more positive attitude towards their products. Students could even use sets of questions as a means to learn: the interaction with the resource could be used to add some piece of knowledge. Using resources of this kind might prove somewhat risky, as some kinds of items might prove harder to develop and implement than others. For example, in most current platforms it is much easier and faster to insert verbal questions
with few symbolic expressions and no images. Moreover, items like multiple-choice or true/false ones cannot provide complete information about students' achievements. For example, devising a solution strategy for a problem, representing it, and describing it in words involve fundamental skills that should not be overlooked. Uncritical use of test items might also induce some high school teachers or students to neglect the skills related to argumentation. Thus users should be warned that prearranged-answer items cannot provide a complete evaluation of their achievements, and opportunities to deal with open-answer items should be provided anyway. This could be achieved by means of resources allowing people to post files, like Moodle's task or “workshop” or the IWT classroom shared area. Of course items of this sort cannot be evaluated automatically, but require more sophisticated patterns of evaluation. In the fall of 2006 at the University of Piemonte Orientale, some 150 biology, chemistry, and environmental sciences students were offered more than 300 quiz items covering all the topics of the “Introductory Mathematics” course, from linear algebra to differential and integral calculus. On average each item was dealt with by 34 individuals. More precisely, students split into two groups. About half of them visited the platform on a regular basis and tried to answer a fair number of items. The other half visited the platform occasionally, made just a few attempts to answer some items, and completed, at most, one set of them. The number of students regularly visiting the platform and attempting to answer a reasonable number of items is far beyond our expectations. Their outcomes, although not significant from a statistical viewpoint, encourage us to go on with the experience and to expand and improve the offer of activities on the platform.
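The automatic evaluation of prearranged-answer and numerical items discussed earlier in this section can be sketched as follows. This is an illustration of the idea, with names of our own, not the grading code of Moodle or IWT.

```python
# Minimal sketch of automatic item evaluation: prearranged-answer
# formats (multiple choice, true/false, matching) are checked by exact
# comparison, while numerical-answer items need a tolerance so that a
# rounded student answer is still accepted.

def grade_choice(answer, correct):
    """Multiple-choice / true-false / matching: exact comparison."""
    return answer == correct

def grade_numeric(answer, correct, tol=1e-3):
    """Numerical answer: accept values within a small tolerance,
    so 0.3333 passes for 1/3 while 0.3 does not."""
    return abs(answer - correct) <= tol

assert grade_choice("b", "b")
assert not grade_choice("c", "b")
assert grade_numeric(0.3333, 1 / 3)
assert not grade_numeric(0.3, 1 / 3)
```

Open-answer items, as noted above, escape this kind of check entirely and need human (or far more sophisticated) evaluation.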
Interactions and Role-Play

The experience we are going to describe may be placed in the framework of cooperative learning previously described. The experiment has been carried out in 2005-2006 in the universities of Salerno and Piemonte Orientale, both in Italy. It has been organised by selecting two working groups: an experimental group and a control group. In our setting, the subject matter has been split into various sections. For each section, rounds of different activities have been planned for the two selected groups. The activities of the experimental group have been based on role-play. In each round each student has dealt with three tasks:

• The student acts as a teacher, devising some questions as if he or she were to evaluate someone else's learning outcomes;
• The student answers the questions proposed by a peer;
• The student again acts as a teacher and checks the output (both questions and answers) of two peers.
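One possible way to rotate the three roles within a round can be sketched as follows: student i's questions are answered by student i+1 (mod n), and student i checks the output of the next two students. The rotation scheme is our own illustration; the chapter does not specify how peers were actually paired.

```python
# Hypothetical rotation of the role-play tasks within one round.
# Each student devises questions, answers the questions of one peer,
# and checks the output (questions and answers) of two peers.

def assign_roles(students):
    """For each student, return whose questions they answer and which
    two peers' output they check. Requires at least three students so
    that nobody checks his or her own output."""
    n = len(students)
    assert n >= 3, "rotation needs at least three students"
    plan = {}
    for i, s in enumerate(students):
        plan[s] = {
            "answers_questions_of": students[(i + 1) % n],
            "checks_output_of": [students[(i + 1) % n],
                                 students[(i + 2) % n]],
        }
    return plan

plan = assign_roles(["Ada", "Ben", "Cai", "Dee"])
assert plan["Ada"]["answers_questions_of"] == "Ben"
assert plan["Ada"]["checks_output_of"] == ["Ben", "Cai"]
assert plan["Dee"]["answers_questions_of"] == "Ada"
```

A scheme of this kind makes it easy for the tutors to verify that every student's file is answered by exactly one peer and checked by exactly two.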
At the end of each round, the tutors revise all the files produced and make them available to all the students. The activities of the control group have been based on standard problem solving. Each member of the group was asked to autonomously solve problems provided by the staff (teacher and tutors) in a given time. Then the staff makes solution patterns available for self-evaluation. A third group has implicitly emerged: the passive users of the platform, who have at their disposal lecture notes, self-evaluation tests, other materials (worked-out problems, problems with hints for solution, FAQ), and the opportunity to contact the teacher, the tutors, and other students.
The outcomes of the experiment have been collected at the end of the course by means of interviews aimed at understanding how the activities carried out have affected the way of studying, what progress has been noticed by the students themselves, and which role (among those played) has been considered particularly useful and why. The interviews have given evidence of many benefits due to peer-to-peer activities (see, for example, Albano, 2006, or Albano, Bardelle, & Ferrari, 2007): strengthening communication skills, critical enquiry, and reflection; clarifying subject content through discussion; viewing situations from different perspectives; learning how to work as a team member; becoming actively involved in the learning process; learning to learn. In particular, looking at the benefits identified by the students for each role, we can summarize as follows. The most appreciated role has been the first one, because it has allowed them to take the teacher's perspective, thus becoming able to understand the educational goals. Moreover, devising questions has helped them to study in a more critical and deeper way, with greater care, because posing a question is not simple and there is no ready-made method for doing it. At the same time, the request for a certain number of questions on a topic requires ranging over the entire programme, not concentrating only on the specific and restricted topic but also paying attention to all the other linked topics. It is also interesting to note that some students have used this role to make critical points clear (posing as questions exactly their own doubts). Finally, we noticed some noncognitive aspects, such as the tendency to pose nontrivial questions, partly out of pride, and this has required mastery of the topics. The second role, answering questions, has been considered useful because it has allowed students to appreciate topics usually neglected.
Teachers commonly come across the students' quite general assumption that the questions posed at the exams are tortuous,
and this is why they fail. Some students have appreciated receiving from their colleagues some questions considered “tortuous,” so that they have been forced to think them over. Actually, looking at the papers produced by the students, there are no really tortuous questions, just as there are none at the exams. Anyway, the feeling of the students simply shows their familiarity with a flat, rote-learning style that is related to the lack of self-posed questions. In the same direction, we note that most of them have found questions that they had not thought of before. The role of the teacher who checks correctness is not really much appreciated, essentially for two reasons: students either do not feel equal to the task or consider it not useful because they think they will surely do well. The role-play activities also affected students' working methods. The students have acquired the habit of going into depth as a standard practice, and the habit of looking at something from more viewpoints (also through comparison with other colleagues). This has changed attitudes toward studying, fostering the practice of reasoning rather than learning by heart. The involvement in the activities proposed has given the students a sort of guidance for the organization of their study, providing time constraints, topics to revise, and indications of the relevant activities. Finally, we want to note that some students have appreciated this kind of group activity also as training for their future work. From a practical viewpoint, some management difficulties are to be mentioned. The experimental activities described require some work for revising students' products, and this has to be done in itinere (along the way) as much as possible, in order to influence their further elaborations. So, on the basis of our experience, the availability of a staff composed of a suitable number of tutors is essential: maybe one tutor per 10-20 users could be appropriate.
Of course, the coordination between the teacher and the tutors has to be taken into account as well.
Communication and Semiosis

The activities described in the previous section are a good example of communication that involves the adoption of different registers (i.e., use-related linguistic varieties). The students have to understand each other, but also to convey some mathematical ideas. These two tasks may require different linguistic resources, and students have to switch between informal registers, in order to communicate with each other as persons, and more formal ones, in order to describe mathematical ideas. Looking at the files produced by the students during the activities, we can find a range of examples of conversion between different registers and semiotic systems. The following two questions posed by a student require conversion between an informal register and a mathematical one: “Write the Cauchy problem (in mathematical language)” and, conversely, “Explain by words the Cauchy problem.” We also note that, even if the students did not explicitly use graphical tools in their activities, some questions they posed involved some sort of figural representation, as shown by a question like “In the Cauchy problem, which means graphically the expression y’(x0)=y0?”, which requires switching from an analytic expression to a graphical one and then explaining by words. Furthermore, many of the students' questions regard the interpretation of symbols in a given setting, such as “Which indicates cB(v)?” and “Which represents the column j of the homomorphism representative matrix?” If we try to trace the evolution of the use of language by the students through the activities described, we can say that at the beginning the use of language is seemingly more formal, and in some sense more precise from the mathematical viewpoint. Actually, it is only a more rigid usage, due to the fact that students are not used to “talk of
mathematics,” so their questions are standard (e.g., “How a group is defined in Algebra?”) and the answers exactly conform to some piece of a book or lecture. Going on, students try to pose questions requiring some consideration of different topics or registers or semiotic systems, with the obvious consequence that the answers cannot exactly conform to the style of a textbook (e.g., “Why the main coefficients of a conic after the rotation are the eigenvalues of the original quadratic matrix?”, “Which is the relation between the rank and the determinant of a matrix?”, “What are the admissible representations of a vector space?”, “Is the intersection of two distinct planes in R3 a vector space? Justify your answer.”). The share of nonstandard questions has increased as the activities have gone on, averaging 45% of the total number of questions. So, for one thing, this is a good advance in mathematical thinking; for another, although they use a number of informal or even inaccurate expressions, students gradually improve their understanding of the meanings involved in mathematical expressions. A platform, anyway, provides plenty of opportunities for designing communication situations involving the use of a wide range of linguistic resources. More generally, ICT provides matchless opportunities for designing tasks involving conversion between semiotic systems, as defined above. The following problem can be quite easily inserted as an item in different e-learning modules.

1. Consider the function f defined for any x ∈ R by f(x) = x⁵/7 − x.
   a. Find f’(x) ......
   b. Compute f’(0) ......
2. Among the following graphs mark at least two which do not match f. Explain.
[Four candidate graphs, labelled A), B), C), and D), appear here.]
A problem like this (administered to science freshman students) involves conversions between formulas, graphs, and tables of values. It involves neither advanced mathematical content nor sophisticated use of semiotic systems, but it requires the coordination of some pieces of mathematical knowledge and three different semiotic systems. Problems like this are rarely proposed in standard teaching activities based on paper-and-pencil or blackboard work only. Nevertheless, they provide unique learning opportunities from almost all the perspectives discussed in this chapter.
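Parts (a) and (b) of the problem can be worked in the symbolic system directly: by the power rule, f’(x) = 5x⁴/7 − 1, so f’(0) = −1. The following Python sketch cross-checks this analytic answer against the tabular/graphical system via a central difference; the numerical check is our own addition, not part of the original item.

```python
# Symbolic answer to the problem above, f(x) = x**5/7 - x:
# by the power rule, f'(x) = 5*x**4/7 - 1, hence f'(0) = -1.
# The central difference estimates the slope from nearby graph
# points, i.e., from the tabular/graphical representation.

def f(x):
    return x**5 / 7 - x

def f_prime(x):
    return 5 * x**4 / 7 - 1

def central_difference(g, x, h=1e-5):
    """Slope estimated from two nearby points of the graph of g."""
    return (g(x + h) - g(x - h)) / (2 * h)

assert f_prime(0) == -1
# Analytic and graph-based derivatives agree at sample points:
for x in (-1.0, 0.0, 0.5, 2.0):
    assert abs(f_prime(x) - central_difference(f, x)) < 1e-6
```

The agreement of the two computations is itself a small exercise in coordinating the symbolic and the tabular/graphical representation systems.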
Affective Aspects: Students, Teachers, and Mathematics

The use of an e-learning platform as a support to a standard lecture-based course also affects emotional aspects. Some investigations (Albano, 2005) have strongly pointed out students' expectations and beliefs about their relationship with mathematics and the teacher. The interviews have highlighted the importance of the role of the teacher as a tutor and as a guide for a proper use of technology. Otherwise, the computer may prove an obstacle if the work is not properly supported by the teacher, because of the risk of getting lost due to the “dispersiveness” typical of technological tools. We underline that even from the first question the expectation of wider contact with the teacher
has been made explicit, and it remains unwavering through the entire questionnaire. A considerable share of students actually expects an improvement in the relationship with the teacher, due to the increased opportunities to communicate provided by the technological tools. We suppose that this feeling of closeness (even if not physical) should be read as “it is beautiful to know that there is someone.” In other words, they greatly appreciate that the teacher is always at hand (by e-mail, for instance) if they wish or need. Through the platform the teacher is perceived as closer, more helpful, and so forth, and these factors have a positive influence on the motivation to study, on the involvement in the course, and on understanding. In almost 50% of the questionnaires, the students refer to their expectation of wider, more frequent, and easier opportunities to interact with the teacher. Such an expectation is so strong as to be expressed anyway, independently of the question posed: we might be talking of either the course or their learning outcomes, or their relationship with mathematics, but in any case their expectation emerged in an almost “intrusive” way! At the undergraduate level, maybe this issue is felt as an important one because of the larger number of students per class compared to high school, which might weaken the relationship between teacher and student. So we can read their answers as a request for some contact with the teacher, who is, the students feel, remote and missing. Tools such as those offered
by ICT not only bring the students nearer to the teacher, but induce them to communicate in a less formal, less rigid, “warmer” way. In other words, the relationship between teacher and students becomes less asymmetrical. Note that the improvement of the quality of the relationship between teacher and students greatly influences the relationship between students and mathematics. Actually, 44% of the students claim that the ICT support, by itself, cannot change their feelings about mathematics, but most of them think that the teacher can strongly influence their relationship with mathematics anyway. This is true of the quality of the course too: a teacher who does not love what is being taught and who does not transmit passion to students is the main, or maybe the only, factor that can “unqualify” a course. On the other hand, almost 20% of the learners state that a platform can improve the quality of a course, since it allows the teacher-student relationship to improve because of the “direct contact” created (18.8%). Anyway, most of the learners (70%) expect to progress in mathematics learning and performance thanks to the e-learning platform, for the following main reasons:

• Greater availability of contents/investigations/doubts/tests (37.2%)
• Being always in contact with the teacher (9.3%)
• A more interesting/practical/stimulating/new/involving course (39.5%)
• Easy, fast, deep learning (23.3%)

Further investigations on such expectations have been carried out after attendance of the blended course, in order to compare students' expectations and the actual outcomes (Albano, 2006; Albano et al., 2007). It has been found that the students' expectations have been met quite satisfactorily. The use of an e-learning platform really helps to create a relationship with the teacher that is quite lacking otherwise. We would like
to underline that a teacher who uses a blended course is perceived as a teacher who takes care of students' learning, who wants them to succeed, and who wants to communicate with them. This positively affects students' motivation and hence their outcomes: seeing the background activity of the teacher on the platform (such as updating materials, asynchronous interaction by e-mail and forum, etc.) makes students feel encouraged and eager to learn. Moreover, being familiar enough to communicate with the teacher can help reduce exam-related anxiety, which often cannot be overcome by mastery of the subject alone. Finally, the support offered by a blended course has proved an optimal help for students who failed previous exams. The benefits they obtained not only affected their cognitive and metacognitive state, but also improved their relationship with mathematics.
Future Research Directions

We plan to continue our research on the personalisation of teaching for students with learning difficulties. Personalisation should take into account both specific content-related troubles and the student's emotional profile. Currently, platforms are often used as Learning Content Management Systems, that is, as managers of teaching resources which are labeled according to standard parameters such as kind of resource, school level, depth of treatment, size of the resource, and so on. We are going to adopt forms of labeling that take the instructional context into account more closely (Albano et al., 2006). This affects not just the amount of subject matter to be taught, but also the teaching method. In our opinion, almost none of the current research streams explicitly deals with the emotional aspects of learning or with the need for designing a wide range of learning paths according to the "emotional profile" of each student. As pointed out by Di Martino
and Zan (2002), different attitude profiles can be associated with a given belief, and they require different teaching actions. We mean to design teaching experiments (available to either individuals or cooperative learning groups) suitable for specific emotional profiles and to investigate their outcomes. We also mean to investigate the viability of open-ended remedial activities. These should be carried out not by the teacher or the computer system but through cooperative activities promoting reflection on errors or critical points. In a role-play context (as described earlier) each student revises the wrong answers, but often does not go beyond detecting the error and providing a proper answer. From the interviews we have gathered, it emerges that this role is the least interesting for many students, as they, more often than not, pick the proper answer from some book. An activity asking students to explain why a given answer is to be considered wrong would be much more fruitful, as it involves:

• Linguistic aspects (the linguistic form of the answer)
• Cognitive aspects (processes of analysis and construction of knowledge, such as construction of a counterexample)
• Metacognitive aspects (awareness)
Activities of this sort cannot be carried out in the frame of standard undergraduate lectures. On the contrary, they can be planned and developed in a virtual place such as a platform with cooperative activities, to be concluded by an institutional meeting (virtual or real) with the teacher. As far as semiotic systems are concerned, and in the frame of online learning paths, we want to investigate how to create interactive, open-ended tasks engaging students in "creative" activities of construction, conversion, and treatment of semiotic representations within different semiotic systems, in the setting of multiple-representation systems such as Computer Algebra Systems or
Dynamical Geometry Systems. Actually, we already use multiple representations, but they are almost always pre-arranged by the teacher (e.g., test items involving graphs) and do not fully exploit the opportunity of asking the student to build the representations him/herself.
References

Albano, G. (2005). Mathematics and e-learning: Students' beliefs and waits. In International Commission for the Study and Improvement of Mathematics Education 57 Congress, Changes in Society: A Challenge for Mathematics Education (pp. 153-157). Piazza Armerina: Università di Palermo Press.

Albano, G. (2006). A case study about mathematics and e-learning: First investigations. In International Commission for the Study and Improvement of Mathematics Education 58 Congress, Changes in Society: A Challenge for Mathematics Education (pp. 146-151). Plzeň: University of West Bohemia Press.

Albano, G., Bardelle, C., & Ferrari, P. L. (2007). The impact of e-learning on mathematics education: Some experiences at university level. La matematica e la sua didattica, 21(1), 61-66.

Albano, G., Gaeta, M., & Salerno, S. (2006). E-learning: A model and process proposal. International Journal of Knowledge and Learning, 2(1/2), 73-88.

Balacheff, N. (2000). Teaching, an emergent property of e-learning environments. The Information Society for All (IST 2000). Retrieved October 21, 2007, from http://www-didactique.imag.fr/Balacheff/TextesDivers/IST2000.html

Balacheff, N., & Sutherland, R. (1999). Didactical complexity of computational environments for the learning of mathematics. International Journal of Computers for Mathematical Learning, 4, 1-26.
Baldacci, M. (1999). L'individualizzazione. Basi psicopedagogiche e didattiche. Bologna: Pitagora.

Boud, D., Cohen, R., & Sampson, J. (1999). Peer learning and assessment. Assessment and Evaluation in Higher Education, 24(4), 413-426.

Brousseau, G. (1997). Theory of didactical situations in mathematics. Kluwer Academic Publishers.

Cronbach, L., & Snow, R. (1977). Aptitudes and instructional methods: A handbook for research on interactions. New York: Irvington.

Di Martino, P., & Zan, R. (2002). An attempt to describe a negative attitude toward mathematics. In P. Di Martino (Ed.), Proceedings of the Mathematics Views—XI European Workshop: Research on Mathematical Beliefs (pp. 22-29). Pisa: Università di Pisa Press.

Duval, R. (1995). Sémiosis et pensée humaine. Peter Lang.

Falchikov, N. (2001). Learning together: Peer tutoring in higher education. Falmer Press.

Ferrari, P. L. (2004). Mathematical language and advanced mathematics learning. In M. Johnsen Høines & A. Berit Fuglestad (Eds.), Proceedings of the 28th Conference of the International Group for the Psychology of Mathematics Education (Vol. 2, pp. 383-390). Bergen, Norway: Bergen University College Press.

Intelligent Web Teacher. (2006). Retrieved October 21, 2007, from http://www.momanet.it/english/iwt_eng.html

Johnson, D. W., & Johnson, R. T. (1987). Learning together and alone: Cooperative, competitive, and individualistic. Englewood Cliffs, NJ: Prentice Hall.

Jonassen, D. H., & Grabowski, B. L. (1993). Handbook of individual differences, learning and instruction. Hillsdale, NJ: Erlbaum.
Kieran, C., Forman, E., & Sfard, A. (2001). Learning discourse: Sociocultural approaches to research in mathematics education. Educational Studies in Mathematics, 46, 1-12.

Maragliano, R. (2000). Nuovo manuale di didattica multimediale. Editori Laterza.

Moodle. (2006). Retrieved October 21, 2007, from http://moodle.org/doc/

Perrin Glorian, M. J. (1994). Théorie des situations didactiques: Naissance, développement, perspectives. In M. Artigue, R. Gras, C. Laborde & P. Tavignot (Eds.), Vingt ans de didactique des mathématiques en France (pp. 97-147). Paris: La Pensée Sauvage.

Sfard, A. (2000). Symbolizing mathematical reality into being—or how mathematical discourse and mathematical objects create each other. In P. Cobb, E. Yackel & K. McClain (Eds.), Symbolizing and communicating in mathematics classrooms. Mahwah, NJ: Lawrence Erlbaum Associates.
Additional Reading

Psychology and Mathematics Education

Bruner, J. (1986). Actual minds, possible worlds. Cambridge, MA: Harvard University Press.

Bruner, J. (1990). Acts of meaning. Cambridge, MA: Harvard University Press.

Dreyfus, T. (1991). On the status of visual reasoning in mathematics and mathematics education. In F. Furinghetti (Ed.), Proceedings of the 15th Conference of the International Group for the Psychology of Mathematics Education, Assisi (I) (Vol. 1, pp. 33-48).

Dubinsky, E. (1991). Reflective abstraction in advanced mathematical thinking. In D. Tall (Ed.), Advanced mathematical thinking (pp. 95-123). Dordrecht: Kluwer.
Sfard, A. (2001). There is more to discourse than meets the ears: Looking at thinking as communicating to learn more about mathematical learning. Educational Studies in Mathematics, 46, 13-57.

Vergnaud, G. (1996). The theory of conceptual fields. In L. P. Steffe, P. Nesher, P. Cobb, G. A. Goldin & B. Greer (Eds.), Theories of mathematical learning. Mahwah, NJ: Lawrence Erlbaum Associates.

Vinner, S. (1997). The pseudo-conceptual and the pseudo-analytical thought processes in mathematics learning. Educational Studies in Mathematics, 34, 97-125.

Vygotskij, L. S. (1978). Mind in society: Development of higher psychological processes. Cambridge, MA: Harvard University Press.

Vygotskij, L. S. (1986). Thought and language. Cambridge, MA: The MIT Press.

Zan, R. (2000). A metacognitive intervention in mathematics at university level. International Journal of Mathematical Education in Science and Technology, 31, 1.
Representations and Language

Gombert, J. É. (1992). Metalinguistic development. Chicago: The University of Chicago Press. (Original work published 1990).

Halliday, M. A. K. (1985). An introduction to functional grammar. London: Arnold.
Metacognition and Noncognitive Factors

Morgan, C. (1998). Writing mathematically: The discourse of investigation. London: Falmer Press.

Pimm, D. (1987). Speaking mathematically: Communication in mathematics classrooms. London: Routledge & Kegan Paul.
Information and Communication Technology and E-Learning

Anderson, T., & Elloumi, F. (Eds.). (2004). Theory and practice of online learning. Athabasca University. ISBN 0-919737-59-5. Retrieved October 21, 2007, from http://cde.athabascau.ca/online_book/

Conole, G., Dyke, M., Oliver, M., & Seale, J. (2004). Mapping pedagogy and tools for effective learning design. Computers and Education, 43(1-2), 17-33.

Conole, G., & Fill, K. (2005). A learning design toolkit to create pedagogically effective learning activities. Journal of Interactive Media in Education, 8.

Jonassen, D. H., Howland, J., Moore, J., & Marra, R. M. (2003). Learning to solve problems with technology: A constructivist perspective. Upper Saddle River, NJ: Merrill/Prentice Hall.

Keller, F., & Schauer, H. (2005). Personalization of online assessments on the basis of a taxonomy matrix. In Proceedings of the 8th IFIP World Conference on Computers in Education, WCCE, Cape Town. Retrieved October 21, 2007, from http://www.ifi.unizh.ch/ee/products/publications/paper/WCCE05_Franziska_Keller.pdf

Khalifa, M., & Lam, R. (2002). Web-based learning: Effects on learning process and outcome. IEEE Transactions on Education, 45(4), 350-356.

Koper, R., & Tattersall, C. (Eds.). (2005). Learning design: A handbook on modelling and delivering networked education and training. Springer Verlag.

Kramarski, B., & Gutman, M. (2006). How can self-regulated learning be supported in mathematical e-learning environments? Journal of Computer Assisted Learning, 22(1), 24-33.
Nichols, M. (2003). A theory for e-learning. Educational Technology & Society, 6(2), 1-10. Retrieved October 21, 2007, from http://ifets.ieee.org/periodical/6-2/1.html (ISSN 1436-4522)

Noss, R., & Hoyles, C. (1996). Windows on mathematical meanings: Learning cultures and computers. Kluwer Academic Publishers.

Tall, D., & Thomas, M. (1991). Encouraging versatile thinking in algebra using the computer. Educational Studies in Mathematics, 22, 125-147.

Wiley, D. A. (2000). Connecting learning objects to instructional design theory: A definition, a metaphor, and a taxonomy. In D. A. Wiley (Ed.), The instructional use of learning objects: Online version. Retrieved October 21, 2007, from http://reusability.org/read/chapters/wiley.doc
Chapter IX
AI Techniques for Monitoring Student Learning Process David Camacho Universidad Autonoma de Madrid, Spain Álvaro Ortigosa Universidad Autonoma de Madrid, Spain Estrella Pulido Universidad Autonoma de Madrid, Spain María D. R-Moreno Universidad de Alcalá, Spain
Abstract

The evolution of new information technologies has opened up new possibilities for developing pedagogical methodologies that provide the necessary knowledge and skills in the higher education environment. These methodologies are built around the use of the Internet and other new technologies, such as virtual education, distance learning, and lifelong learning. This chapter focuses on several traditional artificial intelligence (AI) techniques, such as automated planning and scheduling, and how they can be applied to pedagogical and educational environments. The chapter describes both the main issues related to AI techniques and e-learning technologies, and how lifelong learning processes and problems can be represented and managed by using an AI-based approach.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Introduction

The e-learning (Clark, 2001; Kozma, 1991; Meyen et al., 2002) research field has become a hot topic in recent years. Many educators have seen it as a way to re-use previous courses stored in a database or in other electronic formats (Schmitz, Staab, Studer, Stumme, & Tane, 2002), and to give flexibility to existing ones. Moreover, the increasing computing power and the available network infrastructure allow these courses to be shared and distributed among public institutions and private corporations. These new educational approaches are evolving to use the new information technologies, and the Internet, as a virtual platform where all the people involved can implement new ways of communication. Current e-learning techniques are modifying the traditional learning environment of a classroom, desks with students, and a blackboard. These new techniques offer individualised contents and learning methodologies, which traditional courses cannot provide, and allow advanced learners to speed through or bypass contents that are redundant for them, whereas beginners can proceed through those contents more slowly (Small & Lohrasbi, 2003). The progress made by each student can be monitored in order to determine the main problems that the students face when studying the units of a course. By knowing those problems, it is possible to propose e-learning activities that can improve the quality of the learning process and, as a consequence, improve the learning designs. A learning design (LD) can be defined as an application of a pedagogical model for a specific learning objective, a target group, and a specific context or knowledge domain (Koper & Olivier, 2004). Different systems have been implemented to help course designers specify and implement LDs. Two examples are the open-source learning activity management system LAMS (LAMS, 2006) and the course management system Moodle (Moodle, 2006), which supports sequences of activities that can be both adaptive
and collaborative. The different research works in the e-learning area have led to the development of the IMS learning design specification, which is currently used as a standard format for learning designs (IMS LD, 2006). This specification is based on a metalanguage which allows learning processes to be modelled. In the IMS LD model, concepts like roles, activities, or environments are defined for describing learning designs. In higher education, the increasing tendency is to create virtual learning environments (VLE), which are designed to help teachers manage educational courses for their students. This increasing number of platforms, systems, and tools related to virtual education has led to the creation of different e-learning standards. These standards, such as SCORM (2006), have been developed to facilitate the utilization (and reutilization) of teaching materials (through the definition and creation of learning objects). Currently, these technologies and standards are mature enough to incorporate innovative techniques that could provide new mechanisms to deal with learning processes. The new virtual learning environments provide an interesting field for different kinds of researchers: artificial intelligence (AI) researchers, who can experiment with their automatic problem-solving algorithms, or develop and design new algorithms in this complex domain; and educational researchers, who can use a new kind of tools and techniques that could help to detect, reason about, and (automatically) resolve deficiencies in their initial learning designs. One of the areas of AI most suitable to be applied within this context is automated planning and scheduling. Planning techniques generate a plan (a sequence or parallelization of activities) that achieves a set of goals given an initial state and satisfies a set of domain constraints represented by operator schemas.
In scheduling systems, activities are organised along the time line, keeping in mind the resources available. These systems can perfectly handle temporal reasoning
and resource consumption, together with some quality criteria (usually centred around time or resource consumption), but they cannot produce the required activities and their precedence relations, given that they lack an expressive language to represent activities. These techniques have been applied with success in different real (and complex) environments such as industry, robotics, or information retrieval. Of special interest in the last few years has been the development of autonomous architectures (Muscettola, Dorais, Fry, Levinson, & Plaunt, 2002) that can carry out a large number of functions, such as tracking a spacecraft's internal hardware or a rover's position, ensuring correct operation, and repairing it when possible, with little (or no) human intervention. In these new operation models, scientists and engineers communicate the high-level goals to the spacecraft or to the rovers, and these goals are translated into planning and/or scheduling sequences. Then, continuous status checking is performed in order to detect any damage, and, finally, the plan is executed. These systems must also have the capability to understand the errors that occurred during the process of accomplishing the goals. This chapter shows how this kind of AI-based technique can be appropriately used in e-learning, and more specifically in virtual education or VLE domains (Sicilia, Sánchez-Alonso, & García-Barriocanal, 2006). We will apply these techniques to a specific e-learning tool called Task-Based Adaptive Learner Guidance on the Web (TANGOW), developed by some of the authors of this chapter (Carro, Pulido, & Rodriguez, 1999b).
Representation Formalisms in Learning Domains

In this section we will describe different formalisms that have been used in e-learning systems to represent (a) the learning area (domain model), which includes the course concepts and the relationships between them, and (b) the current situation of a given learner with respect to the whole learning process. These models will later be considered, by using a particular e-learning tool, to understand how traditional AI techniques can be incorporated into a particular e-learning system. Several standards and guides have been proposed relating to learning object metadata, student profiles, course sequencing, and so forth. The IEEE Learning Technology Standards Committee (LTSC, 2006) has developed the learning object metadata (LOM, 2006) standard, which specifies the attributes required to describe a learning object, where a learning object is defined as any entity, digital or nondigital, that can be used, re-used, or referenced during technology-supported learning. Relevant attributes of learning objects to be described include type of object, author, owner, terms of distribution, format, and pedagogical attributes, such as teaching or interaction style. The standard also defines how LOM records should be represented in XML and RDF. Promoting Multimedia Access to Education and Training in European Society (PROMETEUS) tries to apply the IEEE LTSC standards to the European context and cultures. Another specification which allows the modelling of learning processes is the learning design (LD) information model (IMS LD, 2006) from the IMS Global Learning Consortium. A learning design is a description of a method enabling learners to attain certain learning objectives by performing certain learning activities in a certain order in the context of a certain learning environment. LD is designed to integrate with other existing specifications. Among these, it is worth mentioning the IMS content packaging (IMS CP, 2006), which can be used to describe a learning unit. A learning unit can have prerequisites which specify the overall entry requirements for learners to follow that unit. In addition, a learning unit can have different components such as roles and activities.
Roles allow the type of participant in a unit of learning to be specified. There are two basic role types: learner and staff. Activities describe the actions a role has to undertake within a specified environment composed of learning objects. LD also integrates the IMS simple sequencing (IMS SS, 2006), which can be used to sequence the resources within a learning object as well as the different learning objects and services within an environment. Content is organized into a hierarchical structure where each activity may include one or more child activities. Each activity has an associated set of sequencing rules which describe how the activity, or how the children of the activity, are used to create the desired learning experience. The learning process can be described as the process of traversing the activity tree, applying the sequencing rules, to determine the activities to deliver to the learner. The general format of a sequencing rule can be expressed informally as: if condition set then action. There may be multiple conditions. Conditions may be combined with a single and combination (all conditions must be true) or a single or combination (at least one condition must be true). Individual condition values may be negated before being combined in the rule evaluation. The U.S. Federal Government Advanced Distributed Learning (ADL) initiative has also proposed a model called the shareable courseware object reference model (SCORM), which describes how the Department of Defense will use learning technologies to build and use the learning environment of the future. The standard defines what is called "Learning Object Metadata," which is a dictionary of tags that are used to describe learning content in a variety of ways. For a given learning object, these metadata describe, for example, what the content is, who owns it, what it costs (if any), its technical requirements, its educational purpose, and so forth.
The order in which learning objects are presented to the learner is specified by using sequencing rules. A sequencing rule has the format if condition then action, where condition
can be chosen among, for example, “completed,” “score less than,” or “time limit exceeded.” On the other hand, an action could be, for example, “skip,” “disable,” or “hide from choice.” The Aviation Industry CBT (Computer-Based Training) Committee (AICC) is in charge of developing guidelines for the aviation industry in the development, delivery, and evaluation of CBT and related training technologies. In addition to these standards, there are other specific proposals such as the GRAFCET representation formalism described in M’tir, Jeribi, Rumpler, and Ghazala (2004), which uses a graph to represent the sequences of course concepts and the possible learning itineraries. Other existing work (Ahmad, Basir, & Hassanein, 2004) uses fuzzy logic to relate attributes in the learner module and concepts in the domain model. The motivation for the use of fuzzy logic is that it is appropriate for representing and reasoning with vague concepts and that the formalisation of the level of understanding of a given concept by a learner is an inherently vague process.
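To make this shared rule format concrete, the following sketch evaluates rules of the form "if condition set then action," combining conditions with and/or and allowing per-condition negation before combination. It is only an illustrative sketch: the class names, the tracking fields in ActivityState, and the example rule are invented here and are not part of the SCORM or IMS SS specifications.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ActivityState:
    """Hypothetical learner tracking data for one activity."""
    completed: bool
    score: float
    time_spent: int  # minutes

@dataclass
class Condition:
    """A predicate on the activity state; negation is applied
    per-condition before the conditions are combined."""
    predicate: Callable[[ActivityState], bool]
    negate: bool = False

    def holds(self, state: ActivityState) -> bool:
        value = self.predicate(state)
        return not value if self.negate else value

@dataclass
class SequencingRule:
    conditions: List[Condition]
    combination: str  # "all" (and-combination) or "any" (or-combination)
    action: str       # e.g., "skip", "disable", "hide from choice"

    def fires(self, state: ActivityState) -> bool:
        results = [c.holds(state) for c in self.conditions]
        return all(results) if self.combination == "all" else any(results)

# Example rule: if completed and score less than 50, then skip.
rule = SequencingRule(
    conditions=[
        Condition(lambda s: s.completed),
        Condition(lambda s: s.score < 50.0),
    ],
    combination="all",
    action="skip",
)

state = ActivityState(completed=True, score=42.0, time_spent=30)
if rule.fires(state):
    print(rule.action)  # -> skip
```

A delivery engine could then run every rule attached to an activity against the learner's tracking data while traversing the activity tree, applying the action of each rule that fires.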
Automated Processes in E-Learning Tools

Most current VLEs contain prefixed courses through which the user navigates and learns the concepts that have been planned. Some e-learning tools include situation learning (SL) courses where the user is presented with different predefined situations and has to choose among different options. The drawback of this type of course is that nothing is dynamically generated, and a lot of effort is required to create challenging situations that keep the user's attention. Although the instructors can get statistics as well as other information about student progress, there is still a lack of feedback among the previous users, the tool, the instructors, what the user is interested in, and the future users. Among the tools that have worked in this direction we can mention the CourseVis system (Mazza & Dimitrova, 2003) and the dynamic assembly engine (Farrell, Liburd, & Thomas, 2004). An approach to automatic course generation (in some ways similar to the one presented in this chapter) is the work of Ullrich (Ullrich, 2005), who uses an AI hierarchical task network (HTN) planner called JSHOP (Ilghami & Nau, 2003) to assemble learning objects retrieved from one or several repositories into a whole course. Our approach can not only link learning objects, but also schedule them over a period of time and consider previous student results to generate different learning designs. Since our goal is to monitor the learning process in TANGOW, the next subsections present a review of the main existing techniques: AI planning and scheduling.
Introduction to Planning Techniques

Planning can be defined as finding a sequence or parallelization of activities that, given an initial state, achieves a set of goals and satisfies a set of domain constraints represented as operator schemas. Using a high-level description, the inputs of these systems are shown in Figure 1:

• Domain theory: The STRIPS (Fikes & Nilsson, 1971) representation is one of the most widely used alternatives. A world state is represented by a set of logical formulae, the conjunction of which is intended to describe the given state. Actions are represented by so-called operators. An operator consists of preconditions (conditions that must be true to allow the action execution) and post-conditions or effects (usually consisting of an add list and a delete list). The add list specifies the set of formulae that become true in the resulting state, while the delete list specifies the set of formulae that are no longer true and must be deleted from the description of the state. A course can be defined in terms of a set of learning activities that are performed by students.
• Problem: Described in terms of an initial state and a goal. The initial state is represented by logical formulae that specify the situation for which a solution is sought. Examples of initial states in a learning environment would be the students' previous knowledge, the resources that a course uses and the time period when they are available, and so forth. Goals are often viewed as specifications for a plan. In a learning environment, a possible goal would be that the student is able to apply critical thinking to a specific subject.
Figure 1. Inputs and outputs of an AI planner (the domain description, the problem description, and optional control rules are fed to the planner)
• Control knowledge: Some AI planners include a third input referred to as control knowledge. It can guide the solver to the right alternatives of the search tree, potentially avoiding backtracking and arriving straight at the solution.

As an output, planners generate a plan with the set of operators that achieves a state (from the initial state) that satisfies the goals. The main AI planning techniques are described next:

• A total order (TO) planner generates solutions that are sequences of totally ordered actions. The basic structure is a tree where nodes can be plans or states and edges are actions or state transitions; then any search algorithm can be applied.
• In a partial order (PO) planner, nodes represent partially specified plans, and edges denote plan-refinement operations such as the addition of an action to a plan. The planning algorithm commits to only the essential ordering decisions; there is no need to prematurely commit to a complete, total sequence of actions.
• A Graphplan planner alternates between graph expansion and solution extraction. The graph expansion extends the planning graph forward until it has achieved a necessary condition for plan existence. The solution extraction phase performs a backward-chaining search on the graph, looking for a plan that solves the problem. If no solution can be found, the cycle repeats the expansion of the planning graph.
• A heuristic search planner (HSP) transforms planning problems into problems of heuristic search by automatically extracting heuristic functions from the STRIPS encoding instead of introducing them manually. The bottleneck is the computation, in every new state, of the heuristic from scratch.
• A SAT-based planner takes a planning problem as input, guesses a plan length, and generates a set of propositional clauses that are checked for satisfiability. After the translation is performed, fast simplification algorithms are used to solve the problem.
• An HTN planner uses task networks and task decomposition (methods). A task network is a collection of tasks that need to be carried out, together with constraints on the order in which the tasks can be performed. The basic algorithm is to expand tasks and resolve conflicts iteratively, until a conflict-free plan can be found that consists only of primitive tasks.
• A Markov decision process (MDP) is defined by an initial distribution over the states, the dynamics of the system with states annotated by the different possible actions, the probabilistic state transitions, and a reward function for making the transition from one state to another. This kind of technique requires full enumeration of all possible states, which can make it intractable in most planning systems; most of the work in this area has focused on using only a subset (the most probable state space) or abstractions of the state space.
• A contingent plan is a plan that contains actions that may or may not actually be executed, depending on the circumstances that hold at the time. Another way to handle uncertainty is by applying probabilistic planners, which use probabilities of the possible uncertain outcomes to construct plans that are likely to succeed.
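The STRIPS machinery and total-order search described above can be made concrete with a minimal sketch: a breadth-first planner over sets of propositions, where applying an operator yields (state − delete list) ∪ add list. The toy learning domain (the operator and proposition names) is invented purely for illustration:

```python
from collections import deque
from typing import FrozenSet, List, Tuple

# A STRIPS operator: (name, preconditions, add list, delete list).
Operator = Tuple[str, FrozenSet[str], FrozenSet[str], FrozenSet[str]]

def make_op(name, pre, add, delete) -> Operator:
    return (name, frozenset(pre), frozenset(add), frozenset(delete))

# Toy learning domain: propositions describe the student's situation.
OPERATORS: List[Operator] = [
    make_op("study-basics", pre={"enrolled"}, add={"knows-basics"}, delete=set()),
    make_op("do-exercises", pre={"knows-basics"}, add={"practiced"}, delete=set()),
    make_op("take-exam", pre={"knows-basics", "practiced"},
            add={"passed"}, delete={"practiced"}),
]

def plan(initial: FrozenSet[str], goals: FrozenSet[str]):
    """Breadth-first total-order planner over STRIPS states."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, steps = frontier.popleft()
        if goals <= state:          # all goal formulae hold in this state
            return steps
        for name, pre, add, delete in OPERATORS:
            if pre <= state:        # preconditions satisfied
                nxt = (state - delete) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                     # no plan reaches the goals

print(plan(frozenset({"enrolled"}), frozenset({"passed"})))
# -> ['study-basics', 'do-exercises', 'take-exam']
```

Real planners replace this blind breadth-first search with the partial-order, graph-based, heuristic, or SAT techniques listed above, but the state-update rule is the same.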
Introduction to Scheduling Techniques

Scheduling can be defined as the process of organising activities along the time line, taking into account the available resources. Many
techniques used in the area of scheduling systems come from the operational research (OR) area (e.g., branch and bound, simulated annealing, Lagrangian relaxation). Lately, constraint programming (CP) has been applied to different scheduling problems with very good results, for example, job-shop scheduling and the RCPSPmax problem (Kolisch & Hartmann, 1999). An RCPSPmax problem consists of a set of activities among which two kinds of constraints can hold:

• Precedence constraints, which impose the restriction that an activity cannot start before its predecessor activities, and
• Resource constraints among activities that consume the same resource, due to the limited capacity of the resource itself.
The objective is to find precedence and resource assignments for all the activities in the horizon imposed. Figure 2 shows a simple example of a Job-Shop Scheduling problem with two resources: resource R1 with a capacity of 2, and resource R2 with a capacity of 3. The left part of the figure shows the precedence constraints among activities and the resources that each one requires. The right part of the figure shows the solution to the problem. Since R1 has a maximum capacity of 2, the 3 activities that consume this resource cannot be performed in parallel. Then, the scheduler will add a precedence constraint to one of them. Resource R2 has a capacity of 3 but none of the activities that require this resource
have to be executed in parallel, so there are no conflicts. If we impose a deadline of 5 time units on the original problem (assuming each activity has a duration of 1), the solution given by the scheduler will also be time consistent; however, any deadline lower than 5 will make the solution inconsistent. Scheduling techniques can thus be easily generalized and applied to a learning environment. In this case, instead of machines and jobs (the job-shop scheduling problem), we have students, educators, and learning units (LUs) in courses. Each learning unit (operation) needs to be processed during a period of time for a given student (machine), and the unit will be supervised by an educator. The course will also have a limited duration (deadline). Each instructor will have a maximum number of students (we consider an instructor as a resource whose total capacity is given by the number of students the instructor is able to advise). We need to know the initial and end time of each LU, considering the precedence constraints among them. The variable values are imposed by the problem conditions: learning activity durations, course duration, number of learners, and so forth. AI systems that integrate a planner and a scheduler generate a plan, or a set of plans, if a solution exists for the given deadline. A plan can be seen as a sequence of operator applications (learning activities) with specific durations that leads from the initial state to a state in which the goals are reached with the resources available.
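The analogy above can be sketched in code. The following greedy list scheduler is illustrative only; the unit names, unit-length durations, and single supervision capacity are our assumptions, not a real scheduling engine. It checks whether a set of learning units with precedence constraints and a limited capacity fits a deadline:

```python
# Greedy list scheduling for learning units (each unit lasts 1 time step),
# mirroring the job-shop analogy: units are operations, and an instructor
# is a resource that can supervise at most `capacity` units in parallel.

def schedule(units, predecessors, capacity, deadline):
    """units: list of unit names; predecessors: unit -> set of units that
    must finish first. Returns unit -> start time, or None when the
    deadline is violated (a time-inconsistent problem, as in the text)."""
    start, done, t = {}, set(), 0
    while len(done) < len(units):
        if t >= deadline:
            return None  # no room left before the deadline
        ready = [u for u in units
                 if u not in done and predecessors.get(u, set()) <= done]
        for u in ready[:capacity]:  # resource (capacity) constraint
            start[u] = t
        done |= set(ready[:capacity])
        t += 1
    return start

plan = schedule(["A", "B", "C", "D"],
                {"D": {"A", "B", "C"}},  # D requires A, B and C first
                capacity=2, deadline=5)
```

With capacity 2 the three ready units A, B, C cannot all run in parallel, so one of them is pushed to the next time step, exactly as the scheduler in Figure 2 adds a precedence constraint.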
Figure 2. A job-shop scheduling example with two resources (R1 with capacity 2, R2 with capacity 3)
In a specific learning design, we need to impose a deadline (the total duration of the course) and the available resources (the number of educators). Then, it is up to the educator and the pedagogical manager to study the best way to distribute the number of hours and their contents among the different units in order to assure the quality of the education process. This task can be done automatically by applying planning and scheduling techniques to a new domain: e-learning environments. Although the process will be explained in detail in the following sections, the basic idea is to change some parameters and to use the feedback from the students that have already followed the course.
TANGOW: A Tool for Virtual Education

TANGOW facilitates the development and deployment of adaptive courses. In these courses, the contents, the navigational options, and the flexibility of the guidance process are adapted both to the users' features and to their actions while interacting with the course. Adaptivity is an important feature because the lifelong learning philosophy is growing in importance in many environments, and the same virtual education course can be accessed by students with different backgrounds, ages, and interests. Teachers can describe adaptive courses by means of tasks and rules. Tasks are the basic units in the learning process. They include topics to be learned, exercises to be done, examples to be observed, and so forth; that is, tasks to be performed in order to learn or put into practice the concepts or procedures involved in the course. Rules specify the way of organizing tasks in the course, along with information about the task execution (order among tasks, if any; free task selection; prerequisites among tasks; etc.). Figure 3 presents an example of the (partial) structure of a TANGOW course on operative
systems, as taught to second-year students of a computer science degree. The current version of the course is composed of a number of tasks representing theory units and also examples. The example shows how tasks are decomposed into subtasks according to specific rules. The complete course consists of four tasks: operative system overview, operative system concepts, distributed systems, and security. These tasks are combined through an AND rule, which states that all the subtasks must be completed exactly in the order in which they are listed in the rule. The operative system overview task is divided into services, security, and architecture subtasks, combined through an OR rule. The OR rule dictates that, in order to complete the composed task, at least one of the subtasks must be completed, in any chronological sequence. In contrast, the operative system concepts task is decomposed, through an ANY rule, into the process, memory, scheduling, input-output, and file subtasks. An ANY rule means that all the subtasks must be completed but the order is not relevant. Continuing with the course, it can be seen that, in order to execute the scheduling task, the student has to sequentially execute the scheduling principles and scheduling algorithms tasks. Scheduling algorithms consists of the study of five specific algorithms, in any order. Finally, the round-robin algorithm task can be learned by reading either the description or any of the three examples provided. Besides the correct sequencing, a rule may state conditions that must be fulfilled in order for the rule to be applied. These conditions are expressed by means of attribute values regarding user features, such as personal characteristics (age, language, experience, etc.), learning style (visual/verbal, intuitive/deductive, etc.) (Paredes & Rodriguez, 2002), preferences (type of information desired, learning strategy, etc.), and actions while interacting with the course (tasks visited, exercises performed, results obtained in tests, time spent on every task, and so on). The latter type of attributes are called “dynamic attributes”
Figure 3. Partial example of a TANGOW course structure
as they must be calculated during the student interaction. In this way, prerequisite relationships between tasks, for example, can be specified in rule activation conditions. Moreover, an educator can describe different course structures and dependencies for different students by specifying several alternative rules for the same composed task. When a student selects the new task to be carried out, the system looks for an appropriate rule (that is, a rule whose conditions are true for the current student) describing how the task must be decomposed. Figure 4 shows an example of three different rules describing possible decompositions of the operative system overview task, each one suitable for students with different characteristics. In this case, the feature considered is whether the student is preparing to become an end user, an operating system designer, or an application programmer.
Because rules can depend on dynamic attributes, the decomposition of the next task can only be computed just in time, when the student selects the given task. In this way, the TANGOW system consults the course description and the data about the student and generates, step by step, a personalized course for each student, adapting the different course aspects to each one.
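The just-in-time rule lookup described above can be sketched as follows; the data structures, attribute names, and rule contents are hypothetical illustrations and do not reflect TANGOW's actual implementation:

```python
# Rules have a task name, a condition on student attributes, a subtask
# list, and a sequencing mode (AND / OR / ANY), as in Table 1.

RULES = [
    {"task": "OS_Overview", "cond": {"role": "end_user"},
     "subtasks": ["services", "security", "architecture"], "seq": "AND"},
    {"task": "OS_Overview", "cond": {"role": "os_designer"},
     "subtasks": ["services", "security", "architecture"], "seq": "OR"},
]

def decompose(task, student):
    """Return (subtasks, sequencing) of the first rule whose conditions
    hold for this student, computed just in time as in TANGOW."""
    for rule in RULES:
        if rule["task"] == task and all(
                student.get(k) == v for k, v in rule["cond"].items()):
            return rule["subtasks"], rule["seq"]
    return None

def completed(seq, subtasks, done):
    """AND: all subtasks (the in-order check is omitted here);
    OR: at least one; ANY: all, in any order."""
    if seq == "OR":
        return any(s in done for s in subtasks)
    return all(s in done for s in subtasks)  # AND / ANY
```

Describing several rules for the same composed task with different conditions, as in the `RULES` list, is how alternative course structures per student profile are expressed.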
Authoring a TANGOW Course

When designing an adaptive course for TANGOW, the first step is to establish the user features to be considered for adaptation. The attributes selected for use in rule conditions compose the user model. These data are stored in the student database along with the log files containing the sequence of actions performed by the students. For example, regarding the operative system
Figure 4. Task decomposition based on student characteristics
course analyzed in Figure 4, the relevant student feature is the student role (end user, application programmer, OS designer). The list of visited tasks will also be needed, as there are prerequisite relationships between some of them (for example, processes is required for scheduling algorithms, Figure 3). Afterwards, the designer describes the adaptive course itself (Carro, Ortigosa, & Schlichter, 2003; Carro, Pulido, & Rodriguez, 1999a) by specifying the tasks and rules that will be part of the course, as well as the contents (generally HTML files) associated with each task and used for page generation. The designer can specify different variations of any of these aspects in
order to adapt the course to student features and actions. Each task is defined as either atomic or composed. Atomic tasks are the leaves of the task tree, while composed tasks are the inner nodes and have one or more rules describing how they are decomposed into subtasks. Table 1 shows some rules describing how composed tasks should be divided into subtasks.
TANGOW Logs

While the student is interacting with the course, all of the student's actions are logged. This log stores information about the tasks the student has
Table 1. Examples of rules for the operative system adaptive course

Task name | Conditions | Subtasks | Sequencing
… | … | … | …
'Operative System Overview' | role = 'End user' | 'Services', 'Security Overview', 'Architecture Overview' | AND
'Operative System Overview' | role = 'Application programmer' | 'Services', 'Security Overview' | ANY
'Operative System Overview' | role = 'OS Designer' | 'Services', 'Security Overview', 'Architecture' | OR
… | … | … | …
Figure 5. A portion of a TANGOW log file
visited, the corresponding time stamps, the level of completeness, and, where applicable, the score obtained. Figure 5 displays a partial example of a log of a student interacting with the adaptive course. Log files enable the course designer to retrace the student's interaction with the system and the course, and can be used with different goals. For example, Ortigosa and Carro (2003) use the information contained in the log files to provide the course designer with information about possible problems or improvement opportunities within the adaptive course.
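As a sketch of how such logs might be mined, the snippet below derives per-task <min, max> duration bounds of the kind used later for rescheduling; the (student, task, start, end) record format is an assumption for illustration, not TANGOW's actual log format:

```python
# Derive <min, max> duration estimates per task from student logs.
from collections import defaultdict

def duration_bounds(log_records):
    """log_records: iterable of (student, task, start_time, end_time).
    Returns task -> (min_duration, max_duration) over all students."""
    per_task = defaultdict(list)
    for _student, task, start, end in log_records:
        per_task[task].append(end - start)
    return {task: (min(ds), max(ds)) for task, ds in per_task.items()}

logs = [("s1", "services", 0, 2), ("s2", "services", 0, 4),
        ("s1", "security", 2, 5)]
bounds = duration_bounds(logs)
```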
Monitoring TANGOW Courses: A Case Study

The proposed e-learning methodology uses automated reasoning techniques, such as planning and scheduling, to automatically learn from possible mistakes in the learning design process and to look
for new solutions (Camacho & R-Moreno, 2007; R-Moreno & Camacho, 2007). This methodology has been implemented in a planning and scheduling (P/S) system called IPSS (R-Moreno, Oddi, Borrajo, & Cesta, 2006). In this section, details will be presented about how the IPSS and the TANGOW systems have been integrated and the advantages obtained with this integration.
TANGOW & IPSS Integration

In IPSS (R-Moreno et al., 2006), reasoning is divided into two levels: the planner module (IPSS-P) focuses on action selection, and the scheduler module (IPSS-S) on time and resource assignment. Figure 6 shows in more detail how the different modules (layers) of IPSS interact. Since our planner is a total-order planner (the solution is a sequence of activities), it is not enough to look for a solution that minimises time and resources. We use a de-ordering algorithm (Bäckström, 92) to eliminate unnecessary causal links. The solution generated by the de-ordering algorithm is then given to the temporal and resource reasoners. During the search process, every time the planner chooses an operator, it consults the scheduler for time and resource consistency. If the resource-time reasoner finds the plan inconsistent, the planner backtracks; if not, the operator is applied and the search process continues. For more details about the algorithm, readers can refer to R-Moreno (2003) and R-Moreno et al. (2006).

Figure 6. The IPSS architecture

Figure 7 shows the architecture of the system that results from the integration of the IPSS and TANGOW systems. The monitored learning process can be described as follows:
1.	The educators define:
	a.	The teaching tasks and rules to build the adaptive course (by using the TANGOW tool).
	b.	A priority and a time estimation for each task (a high priority means that the task is considered essential in the learning process).
	c.	The number and kind of dimensions that will be used to generate the metadata. In our example, the teacher has selected the student knowledge level (end user, application programmer, OS designer).
2.	The students interact with TANGOW. These interactions generate different logs that will be stored in the system.
3.	By using the above-mentioned information (student logs, teaching tasks, rules, task priorities, time estimations), the metadata is generated based on both the logs and the educator estimations.
4.	The metadata is appropriately mapped into an IPSS representation. This mapping process generates the domain and the initial problem that will be used by IPSS to solve the defined problem.
5.	IPSS looks for solutions that solve possible problems existing in the initial learning design by taking into account the teaching
Figure 7. IPSS integration for a specific TANGOW course
task decomposition, their priorities, and their estimated durations.
6.	For each kind of student (end user, application programmer, OS designer), a plan is generated with a possible scheduled course.
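The planner/scheduler interplay described in the integration section (the planner proposes an operator, the scheduler vetoes time/resource-inconsistent partial plans, and the planner backtracks) can be illustrated with a toy depth-first search; the operator names and the consistency stub are hypothetical and only loosely mimic IPSS:

```python
# Depth-first planning with a scheduler-style consistency callback.

def plan_search(state, goal, operators, consistent, plan=()):
    """operators: name -> (precondition predicate, effect function).
    consistent(plan) plays the scheduler's role: it rejects time/resource-
    inconsistent partial plans, forcing the planner to backtrack."""
    if goal <= state:          # all goals hold in the current state
        return list(plan)
    for name, (pre, eff) in operators.items():
        if name in plan or not pre(state):
            continue
        candidate = plan + (name,)
        if not consistent(candidate):
            continue           # scheduler rejects: try the next operator
        result = plan_search(eff(state), goal, operators, consistent,
                             candidate)
        if result is not None:
            return result
    return None                # dead end: backtrack one level up

ops = {
    "overview": (lambda s: True, lambda s: s | {"overview_done"}),
    "concepts": (lambda s: "overview_done" in s,
                 lambda s: s | {"concepts_done"}),
}
# A scheduler stub that allows at most two activities in the horizon:
plan = plan_search(set(), {"concepts_done"}, ops, lambda p: len(p) <= 2)
```

Shrinking the stub's horizon to a single activity makes every complete plan inconsistent, so the search correctly reports failure instead of a plan.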
The following subsections describe the previously listed processes in detail by using the TANGOW course example of Figure 3.
From TANGOW Metadata to P/S-Based Problems

As mentioned, the planner domain theory contains all the actions, represented by operators. The language for describing an IPSS domain theory is based on an augmentation of the representation originally proposed by Fikes and Nilsson (1971). Since this representation is quite restrictive, it has been extended to allow disjunctive preconditions, conditional effects, universally quantified preconditions and effects, quality metrics, durations, time and resource constraints, and continuous values.
The first step in defining a domain consists of identifying the operators and the object types needed in the domain (for declaring the type of each operator variable). Types can be structured in a hierarchy. A special type, the infinite type, allows continuous-valued variables to be represented, while finite standard types represent nominal types. In our domain we have, among others, the following types: STUDENT, ROLE, SUBTASK, COURSE, DURATION, PRIORITY, and TIME. Variables of type STUDENT instantiate to the possible student stereotypes. Variables of type ROLE represent the relevant user features (end user, application programmer, OS designer), and COURSE represents the courses that we want to track. Finally, DURATION, PRIORITY, and TIME are infinite types that allow us to handle the numerical values needed to calculate the duration and priority of each task. Following the TANGOW example described in Section 4 (Figures 3, 4, and 5 and Table 1), we can obtain the metadata needed by IPSS (see Table 2). The IPSS operator in Figure 8 is composed of the following fields:
Table 2. TANGOW metadata

Task name | Conditions | Subtasks | Sequencing | Duration (min,max) | Priority
… | … | … | … | … | …
'Operative System Overview' | role = 'End user' | 'Services', 'Security Overview', 'Architecture Overview' | AND | … | …
'Operative System Overview' | role = 'Application programmer' | 'Services', 'Security Overview' | ANY | … | …
'Operative System Overview' | role = 'OS Designer' | 'Services', 'Security Overview', 'Architecture' | OR | … | …
… | … | … | … | … | …
•	Params field: Contains a list of the variables whose values will be printed out through the user interface when a solution is found.
•	Preconds field: Contains the preconditions of the operator.
•	Effects field: Contains the add and delete effects of the operator.
•	Constraints field: Contains the temporal constraints.
The symbols within < > are variables that are instantiated during the problem-solving process. The operator in Figure 8 has two preconditions, a compose-subtask literal for the End_User role and a student-role literal, and three add effects, one done literal per subtask. This operator, called “T_EU_OperatingSystem_Overview,” corresponds to the “Operating System Overview” task under the “End user” role condition in Table 2, which is composed of the “Services,” “Security Overview,” and “Architecture Overview” subtasks. These subtasks are represented within the operator as variables, and the role variable will be instantiated with the End_user value. Three further variables represent the subtask priorities and can take numbers as values. We need to use the gen-from-pred IPSS function to constrain the values that these three variables can take. This function generates a list of values as the bindings for a variable by using the information in the current state. In this example, for the priority of Services, gen-from-pred returns the list of values {x} greater than or equal to 1 such that the literal (prioritySe End_user Services x) is true in the current state. Three more variables represent task durations. As the values they can take are lists of two elements corresponding to the minimum and the maximum duration, we need
Figure 8. An IPSS operator corresponding to the “operating system overview” task
to use the constraint-from-pred IPSS function to constrain the values that these three variables can take. This function generates a list of possible bindings for the corresponding variable by using the information in the current state about the durations of the subtasks that compose a specific task. IPSS will choose a possible integer value in that range during problem resolution. In order to implement static properties efficiently, IPSS allows user-defined functions to represent them. In our example, we have defined the function diff, which allows us to check whether the objects passed as arguments are different, and the function calculate-total-duration, which encodes the function time = dur + dur1 + dur2. The second input to the planner is the problem to be solved, described in terms of an initial state and a set of goals to be achieved. The description of an initial state is composed of a list of objects and their corresponding types, together with a set of instantiated predicates (i.e., literals) that describe the configuration of those objects. The objects in the state must be instances of the types declared in the domain. Figure 9 shows some initial conditions and two goals corresponding to two student stereotypes. The goals are that learner1, whose student role is end user, and learner2, whose student role is application programmer, must learn the operative system course.
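A problem definition of this kind (typed objects, an initial state of literals, and goals) can be sketched as plain data, together with a sanity check that every constant used in a literal has been declared, the kind of mismatch a planner would otherwise fail on. The dictionary layout below is our own illustration, not the IPSS input syntax:

```python
# A planning problem as plain data, loosely mirroring Figure 9.
problem = {
    "objects": {"learner1": "STUDENT", "learner2": "STUDENT",
                "End_user": "ROLE", "Application_programmer": "ROLE",
                "Operating_System": "COURSE"},
    "state": [("student_role", "learner1", "End_user"),
              ("student_role", "learner2", "Application_programmer")],
    "goals": [("Learn", "learner1", "Operating_System"),
              ("Learn", "learner2", "Operating_System")],
}

def undeclared_constants(problem):
    """Return constants that appear in literals but were never declared
    as objects (argument positions only; index 0 is the predicate name)."""
    declared = set(problem["objects"])
    used = {arg for lit in problem["state"] + problem["goals"]
            for arg in lit[1:]}
    return used - declared
```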
Monitoring TANGOW Courses

Once the metadata has been generated, there are two different planning/scheduling possibilities. On the one hand, it could be the first time that the course is executed. In this situation, the planner has no information about how much time a particular task requires (we use the educator's time estimation). The system will propose a schedule that can be dynamically modified once the interactions with the students provide the initial time durations for the tasks (a teaching task that requires more than the scheduled time will force the subsequent tasks to be modified; see Figure 10). Figure 10 shows the execution of the different teaching tasks proposed by the planner. Any task can be under- or over-estimated depending on external factors (student skills, lab/classroom availability, and so forth). Although the total duration (i.e., makespan) of the course is fixed, IPSS can replan the remaining tasks in order to fit them into the remaining time. IPSS will either increment/decrement the duration of the tasks based on their priorities, or add/delete subtasks based on the AND, OR, and ANY rules. Figure 10 also depicts a possible course execution scenario in which Task 1 was underestimated by the educators (when accessing the course, it took students longer to execute the task). The new time duration (including a time increment) will be managed by the planner/scheduler in the next cycle to adjust the durations of the tasks that have not yet been executed. The planner/scheduler's main objective is to execute all the teaching tasks in the available time. For this reason, overestimations (e.g., Task 2) and underestimations (e.g., Task 1) will be used in every planning/scheduling cycle to (dynamically) adapt the available time. This adaptation can result in the addition or removal of low-priority teaching tasks. On the other hand, if a particular course has been executed several times, we can mine the student logs for a <min, max> estimation of the duration of a particular task. If, for the majority of students, a task requires less time than estimated by the educator, the next time the students take the course we can consider that duration as the baseline duration. This information will be given to IPSS, which will adjust the whole course based on that change. In this way, IPSS will provide a better initial solution in the next course execution. The same occurs if the time assigned to a task by the educator is underestimated.
From the log files we can extract that information and translate it into IPSS to reschedule the whole course. If a rescheduling is needed, two things can happen:
Figure 9. An example of IPSS initial conditions

(objects (learner1 learner2 STUDENT)
         (End_user Application_programmer OS_Designer ROLE)
         (Services Security_Overview Architecture_Overview SUBTASK)
         (Operating_System COURSE))
(state (and
  ; Subtasks that compose the tasks depending on the student role
  (compose-subtask End_user Services Security_Overview Architecture_Overview)
  (compose-subtask Application_programmer Services Security_Overview)
  ; ...
  ; Features for each student stereotype
  (student_role learner1 End_user)
  (student_role learner2 Application_programmer)
  ; Priorities for each subtask
  (prioritySe End_user Services 1)
  (prioritySO End_user Security_Overview 2)
  (priorityAO End_user Architecture_Overview 3)
  (prioritySe Application_programmer Services 1)
  (prioritySO Application_programmer Security_Overview 2)
  ; ...
  ; Durations for each subtask
  (durationSe End_user Services (2 4))
  (durationSO End_user Security_Overview (3 6))
  (durationAO End_user Architecture_Overview (5 7))
  (durationSe Application_programmer Services (2 3))
  (durationSO Application_programmer Security_Overview (2 4))
  (durationSe OS_Designer Services (2 3))
  ; ...
))
(goal (and (Learn learner1 Operating_System)
           (Learn learner2 Operating_System)))
Figure 10. Sequential teaching tasks execution
Figure 11. Cyclic TANGOW course execution
all the tasks can still fit into the total time assigned to the course, or they cannot. In the latter case, IPSS will have to eliminate one or more tasks from the course. In both situations, with and without previous student information, IPSS can use the available information to schedule tasks (in the first case, a worse time assumption is made). Figure 11 shows how the cyclic execution of a particular course can be used by IPSS to iteratively improve the quality of the plans. A proposed schedule has a higher quality if it is better adapted both to the students' characteristics and to the available time.
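The replanning idea above (fit the remaining tasks into the remaining time, dropping low-priority tasks when even their minimum durations do not fit) can be sketched as follows; the tuple format and the proportional-compression policy are our assumptions, not the actual IPSS algorithm:

```python
# Rescale or drop remaining tasks so they fit the remaining time.

def reschedule(remaining, time_left):
    """remaining: list of (task, min_dur, est_dur, priority); a higher
    priority means more important. Drops the lowest-priority tasks until
    the minimum durations fit, then compresses estimates proportionally
    (flooring at each task's minimum duration)."""
    tasks = sorted(remaining, key=lambda t: -t[3])  # high priority first
    while tasks and sum(t[1] for t in tasks) > time_left:
        tasks.pop()  # eliminate the lowest-priority task
    total_est = sum(t[2] for t in tasks)
    if total_est <= time_left:
        return {t[0]: t[2] for t in tasks}  # estimates already fit
    scale = time_left / total_est
    return {t[0]: max(t[1], t[2] * scale) for t in tasks}

remaining = [("t3", 1, 2, 1), ("t4", 2, 4, 3), ("t5", 1, 2, 2)]
relaxed = reschedule(remaining, 10)  # enough time: keep the estimates
tight = reschedule(remaining, 3)     # too tight: drop lowest priority t3
```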
The IPSS Final Solution

Finally, IPSS generates a plan with the sequence of operators that leads from the initial state to a state satisfying the goals, together with their start and end times. Figure 12 shows the solution generated from the initial conditions of Figure 9. In the solution, IPSS instantiates each operator (that is, each task in TANGOW) by giving values to the task's starting and ending times and to the subtask list that composes the task. The fact that some tasks (operators
instantiated) have prerequisite relationships with others already performed imposes the restriction that the starting time of a task must be later than the ending time of its prerequisite tasks. This is the case for the “T_EU_OperatingSystemConcept” and “T_EU_OperatingSystemOverview” tasks in Figure 12: the starting time of “T_EU_OperatingSystemConcept” is equal to the ending time of “T_EU_OperatingSystemOverview”. These tasks belong to the set of tasks that have to be performed by the student with the “End user” role (see Table 2). Since the sequencing is an “AND,” all the tasks must be performed sequentially. Such prerequisite relationships may not exist between other tasks, such as the “T_AP_OperatingSystemOverview” and “T_EU_OperatingSystemOverview” tasks, which can be executed in parallel. The “T_AP_OperatingSystemOverview” task will be performed by a student with the “application programmer” role, which has no conflicts with the tasks performed by the “end user” student.
Figure 12. The IPSS final solution

Conclusion

In the first e-learning systems, any student, no matter what the student's personal features were, was presented with the same materials, and the same exercises were proposed in the same order. The next generation of e-learning environments are those able to adapt the deployed course to the features and actions of the student. In this way, every student follows an individualised course specifically deployed for him or her. We can refer to this kind of adaptation as individual adaptation, performed on the basis of the individual characteristics and actions of each student accessing the system. This chapter shows how it is possible to go one step further in the development of e-learning systems and implement a group-based adaptation, based on the actions not of an individual student but of a set of students who have accessed the system over a period of time. The basic idea is to register student actions while they interact with an e-learning course based on an initial learning design. A Web-based learning system called TANGOW is used for this purpose. Then, planning and scheduling techniques, as implemented in the IPSS system, are applied to the data collected
in the TANGOW log files in order to refine the initial learning design by adjusting the duration and ordering of the activities proposed to the students accessing the e-learning system. For the integration, the TANGOW rules and conditions are translated into IPSS operators, and the TANGOW attributes into IPSS types, to produce a plan that corresponds to a course instance (task dependencies). Consequently, the deployed course adapts itself not only to the personal features and actions of every student who accesses the course, but also to the global actions of a group of students. With this approach, the generated courses are automatically validated, avoiding inconsistencies among linked activities, the durations given to each activity, and the total duration of the course, saving educators time. Since the planning and scheduling techniques can be applied to the collected log files as many times as required, the process of improving and refining becomes a lifelong process that stops only when the designer or tutor considers it appropriate.
Finally, we also want to mention the benefits that AI planners can gain from this approach. Generally, specifying a domain theory requires a deep understanding of the way AI planners work and of their terminology. However, if we use a tool like TANGOW, the description language is closer to the user and allows automatic verification of the syntax through a friendly interface. Also, a new domain in which to apply and develop new AI algorithms has been created. We are currently in the process of testing the proposed approach. In the first stage, we are working with synthetic logs. The next stage will involve empirical tests with real users to verify improvements in the course.
Future Research Directions

The current state of computer-based education technologies, tools, and standards provides interesting new perspectives for other research areas, such as artificial intelligence. Well-established standards, such as IMS, SCORM, and LOM, are currently being used to define and develop new adaptive virtual education tools. These tools support the creation of personalized learning designs (LD). With these new designs it is possible to reuse and exchange useful information among different platforms. These new tools can be used by educators (and/or learning designers) not only to define the contents of the course (i.e., using the IMS LD specification), but also to create adaptable and personalized learning flows, so that the educational system can monitor and control the whole learning process. This chapter has described a particular authoring tool (TANGOW) that can be used to achieve this goal based on teaching rules, and has shown how a particular AI system that integrates planning and scheduling techniques can be used to improve the learning quality of a particular course. In our approach, the term quality reflects the fact that suggesting one or several modifications is a simple way to control and monitor a particular course. In this way, educators can easily detect hidden problems, improve their designs, and reach their final goals. In the near future, however, a new kind of adaptive, intelligent education tool will be designed and developed entirely using these techniques (other related AI-based techniques, such as machine learning, are already used to learn student profiles and take learning decisions). With these new tools it will be possible to control, monitor, and automatically solve several kinds of detected problems in the deployed learning designs. These problems can be detected automatically (by logging the interactions with the students and analysing the quantitative results from tests and exams), or provided directly by educators and learning designers, as we have shown previously. These new tools will be able to automatically evaluate, throughout the lifelong learning process of a particular course, the problems found, and automatically modify the learning designs in order to smooth them out. To achieve this goal it will be necessary to adapt and integrate well-known artificial intelligence techniques, such as automated planning and scheduling, which allow us to deal with problems like resource assignment or the organisation of different activities (cost, duration, time) in a particular time period. Therefore, it will be necessary to define adequate mechanisms to translate correctly from the e-learning standards into the planning and scheduling standard language representations. In this chapter we have presented an initial approach, based on the adaptive virtual education and authoring tool used.
AcknowLedgment This work has been funded by the following research projects: TSI2006-12085, UAH PI2005/084, PAI-0054-4397, TSI2005-08225C07-06 and TIN2004-03140.
AI Techniques for Monitoring Student Learning Process
reFerences Ahmad, A., Basir, O., & Hassanein, K. (2004). Adaptive user interfaces for intelligent e-learning: Issues and trends. In Proceedings of the 4th International Conference on Electronic Business (ICEB2004) (pp. 925-934). Camacho, D., & R-Moreno, M.D. (2007). Towards an automatic monitoring for higher education learning design. International Journal of Metadata, Semantics, and Ontologies, 2(1), 1-10. Carro, R.M., Ortigosa, A., & Schlichter, J. (2003). A rule-based formalism for describing collaborative adaptive courses, KES2003. Lecture Notes in Artificial Intelligence, 2774, 147-178. Carro, R.M., Pulido, E., & Rodríguez, P. (1999a). Designing adaptive Web-based courses with TANGOW. In G. Cumming, T. Okamoto & L. Gómez (Eds.), Advanced research in computers and communications in education (pp. 147-178). Amsterdam: IOS Press. Carro, R.M., Pulido, E., & Rodríguez, P. (1999b). Dynamic generation of adaptive Internet-based courses. Journal of Network and Computer Applications, 22, 249-257. Clark, R.E. (2001). New directions: Evaluating distance education technologies. In R.E. Clark (Ed.), Learning from media: Arguments, analysis, and evidence (pp. 125-136). Greenwich, CT: Information Age Publishing. Farrell, R., Liburd, S.D., & Thomas, J.C. (2004). Dynamic assembly of learning objects. In Proceedings of the 13th International World Wide Web Conference, NY. Fikes, R., & Nilsson, N. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2, 189-208.
Ilghami, O., & Nau, D.S. (2003). A general approach to synthesize problem-specific planners (Tech. Rep. CS-TR-4597). University of Maryland: Department of Computer Science. IMS CP. (2006). Retrieved October 22, 2007, from http://www.imsglobal.org/content/packaging/ IMS LD, IMS Learning Design. (2006). IMS Global Learning Consortium. Retrieved October 22, 2007, from http://www.imsglobal.org/learningdesign/index.html IMSSS, IMS Simple Sequencing. (2006). Retrieved October 22, 2007, from http://www.imsglobal.org/simplesequencing/index.html Kolisch, R., & Hartmann, S. (1999). Heuristic algorithms for solving the resource-constrained project scheduling problem: Classification and computational analysis. Project scheduling: Recent Models, Algorithms and Applications, 147-178. Koper, R., & Olivier, B. (2004). Representing the learning design of units of learning. Educational Technology & Society, 7(3), 97-111. Kozma, R. (1991). Learning with media. Review of Educational Research, 61(2), 179-212. LAMS. (2006). Learning Activity Management System. Retrieved October 22, 2007, from http:// lamsfoundation.org/ LOM. (2006). Retrieved October 22, 2007, from http://ltsc.ieee.org/wg12/ LTSC. (2006). Retrieved October 22, 2007, from http://ieeeltsc.org/ Mazza, R., & Dimitrova, V. (2003, July 20-24). CourseVis: Externalising student information to facilitate instructors in distance learning. In U. Hoppe, F. Verdejo & J. Kay (Eds.), Proceedings of the International conference in Artificial Intelligence in Education, Sydney, Australia.
Meyen, E.L., Aust, R., Gauch, J.M., Hinton, H.S., Isaacson, R.E., Smith, S.J., et al. (2002 ). E-learning: A programmatic research construct for the future. Journal of Special Education Technology, 17(3), 37-46. Moodle. (2006). Retrieved October 22, 2007, from http://demo.moodle.com/ M’tir, R.H., Jeribi, I., Rumpler, B., & Ghazala, H.H.B. (2004). Reuse and cooperation in e-learning systems. In Proceedings of the Fifth International Conference on Information Technology Based Higher Education and Training, ITHET (pp. 131-137). Muscettola, N., Dorais, G.A., Fry, C., Levinson, R., & Plaunt, C. (2002). IDEA: Planning at the core of autonomous reactive agents. In Proceedings of the Workshop Online Planning and Scheduling, AIPS 2002 (pp. 49-55). Toulouse, France. Ortigosa, A., & Carro, R. (2003). The continuous empirical evaluation approach: Evaluating adaptive Web-based courses. User modeling. Lecture Notes in Computer Science, 2702, 163-167. Paredes, P., & Rodríguez, P. (2002). Considering sensing-intuitive dimension to exposition-exemplification in adaptive sequencing. In P. De Bra, P. Brusilovsky & R. Conejo (Eds.), Adaptive hypermedia and adaptive Web-based systems. Lecture Notes in Computer Science, 2347, 556-559.
R-Moreno, M.D. (2003). Representing and planning tasks with time and resources. Ph.D. thesis, Universidad de Alcalá.
R-Moreno, M.D., & Camacho, D. (2007). AI techniques for automatic learning design. In Proceedings of the International e-Conference of Computer Science (IeCCS 2006), Lecture Series on Computer and Computational Sciences (LSCCS) (vol. 8, pp. 193-197). VSP/Brill Academic Publishers.
R-Moreno, M.D., Oddi, A., Borrajo, D., & Cesta, A. (2006). IPSS: A hybrid approach to planning and scheduling integration. IEEE Transactions on Knowledge and Data Engineering, 18(12), 1681-1695.
Schmitz, C., Staab, S., Studer, R., Stumme, G., & Tane, J. (2002). Accessing distributed learning repositories through a courseware watchdog. In Proceedings of the E-Learn 2002 World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education.
SCORM. (2006). Sharable Courseware Object Reference Model. Retrieved October 22, 2007, from http://www.academiccolab.org/projects/scorm.html
Sicilia, M.A., Sánchez-Alonso, S., & García-Barriocanal, E. (2006, March 23-25). Supporting the process of learning design through planners. In Virtual Campus 2006 Post-Proceedings, CEUR Workshop Proceedings (vol. 186). Barcelona, Spain.
Small, M., & Lohrasbi, A. (2003). Student perspectives on online degrees and courses: An empirical analysis. International Journal on E-Learning, 2(2), 15-28.
Ullrich, C. (2005). Course generation based on HTN planning. In Proceedings of the 13th Annual Workshop of the SIG Adaptivity and User Modeling in Interactive Systems (pp. 74-79).

AddItIonAL reAdIng

This section provides some additional references related to the main research topics described in this chapter: AI planning and scheduling techniques, virtual education, authoring tools, and e-learning standards. We have included both classical texts and some recent publications that readers can use to learn more about the above-mentioned research themes.
ADL, Sharable Content Object Reference Model, SCORM. (2006). Retrieved October 21, 2007, from http://www.adlnet.org/index.cfm?fuseaction=Scormabt Albers, P., & Ghallab, M. (1997). Context dependent effects in temporal planning. In Proceedings of the 4th European Conference on Planning, Toulouse, France (pp. 1-12). Allen, J.F., Hendler, B., & Tate, A. (1990). Readings in planning. Morgan Kaufmann. Anane, R., Chao, K.-M., Hendley, R.J., & Younas, M. (2003). In Proceedings of the International Conference on Internet and Multimedia Systems and Applications (pp. 104-108). Honolulu. Andriessen, J., & Sandberg, J. (1999). Where is education heading and how about AI? International Journal of Artificial Intelligence in Education, 10, 130-150. Bacchus, F., & Kabanza, F. (2000). Using temporal logics to express search control knowledge for planning. Artificial Intelligence, 16, 123-191. Berlanga, A.J., & García, F.J. (2005). Authoring tools for adaptive learning designs in computer-based education. In Proceedings of the 2005 Latin American Conference on Human-Computer Interaction (pp. 190-201). Blum, A., & Furst, M. (1997). Fast planning through planning graph analysis. Artificial Intelligence, 90, 281-300. Blythe, J. (1999). Decision theoretic planning. AI Magazine, 20(2), 37-54. Bonet, B., & Geffner, H. (2001). Planning as heuristic search. Artificial Intelligence, 129(1-2), 5-33. Brusilovsky, P. (1999). Adaptive and intelligent technologies for Web-based education (Special Issue on Intelligent Systems and Teleteaching). Künstliche Intelligenz, 4, 19-25.
Brusilovsky, P., & Miller, P. (2001). Course delivery systems for the virtual university. In F.T. Tschang & T. Della Senta (Eds.), Access to knowledge: New information technologies and the emergence of the virtual university (pp. 167-206). Amsterdam: Elsevier Science. Burgos, D., Tattersall, C., & Koper, R. (2006). How to represent adaptation in e-learning with IMS learning design. Retrieved October 22, 2007, from http://dspace.ou.nl/bitstream/1820/786/1/BURGOSetal_SofiaExtensionToILE_v3_210806.pdf Carbonell, J.R. (1970). AI in CAI: An artificial-intelligence approach to computer-assisted instruction. IEEE Transactions on Man-Machine Systems, 11(4), 190-202. Castillo, L., Fdez.-Olivares, J., & Gonzalez, A. (2001). On the adequacy of hierarchical planning characteristics for real-world problem solving. In Proceedings of the Sixth European Conference on Planning (ECP'01). Cesta, A., & Oddi, A. (2002). Algorithms for dynamic management of temporal constraint networks (Tech. Rep.). Italian National Research Council. Chang, W.C., Hsu, H.H., Smith, T.K., & Wang, C.C. (2004). Enhancing SCORM metadata for assessment authoring in e-learning. Journal of Computer Assisted Learning, 20(4), 305-316. Clarke, M., & Wing, J.M. (1996). Formal methods: State of the art and future directions. ACM Computing Surveys, 28(4), 626-643. Corno, L., & Snow, R.E. (1986). Adapting teaching to individual differences among learners. In M.C. Wittrock (Ed.), Handbook of research on teaching. New York: Macmillan. Cristea, A. (2005). Authoring of adaptive hypermedia. Educational Technology & Society, 8(3), 6-8.
Currie, K., & Tate, A. (1991). O-plan: The open planning architecture. Artificial Intelligence, 52, 49-86. De Bra, P., Aroyo, L., & Cristea, A. (2004). Adaptive Web-based educational hypermedia. In M. Levene & A. Poulovassilis (Eds.), Web dynamics, adaptive to change in content, size, topology and use (pp. 387-410). Springer. Dechter, R., Meiri, I., & Pearl, J. (1991). Temporal constraint networks. Artificial Intelligence, 49, 61-95. Drapper, D., Hanks, S., & Weld, D. (1994). A probabilistic model of action for least-commitment planning with information gathering. In Proceedings of the 10th Conference on Uncertainty in Artificial Intelligence (178-186). Morgan Kaufman. Drapper, D., Hanks, S., & Weld, D. (1994). Temporal planning with continuous change. In K. Hammond (Ed.), Proceedings of the 2nd International Conference on AI Planning Systems. University of Chicago, Illinois. AAAI Press. Edelkamp, S., & Helmert, M. (2000). On the implementation of MIPS. In AIPS Workshop on Model-Theoretic Approaches to Planning (pp. 18-25). Emerson, E.A. (1990). Temporal and modal logic. Handbook of Theoretical Computer Science (pp. 997-1072). MIT Press. Ernst, M.D., Millstein, T.D., & Weld, D. (1997). Automatic SAT-compilation of planning problems. In Proceedings IJCAI-97 (pp. 1169-1177). Erol, K. (1995). HTN planning: Formalisation, analysis and implementation. Ph.D. Thesis, Computer Science Department, University of Maryland. Friesen, N., & Anderson, T. (2004). Interaction for lifelong learning. British Journal of Educational Technology, 35(6), 679-687.
Hoffmann, J. (2003). The metric-FF planning system: Translating ignoring delete lists to numerical state variables. Journal of Artificial Intelligence Research, 20, 291-341. IMSCP, IMS Content Packaging. (2006). Retrieved October 22, 2007, from http://www.imsglobal.org IMSQTI, IMS Question and Test Interoperability. (2006). Retrieved October 22, 2007, from http://www.imsglobal.org Karagiannidis, C., Sampson, D., & Cardinali, F. (2001). Integrating adaptive educational content into different courses and curricula. Educational Technology & Society, 4(3), 37-44. Kautz, H., & Selman, B. (1992). Planning as satisfiability. In Proceedings of ECAI-92 (pp. 359-363). Koper, R. (2005). An introduction to learning design. In R. Koper & C. Tattersall (Eds.), Learning design: A handbook on modeling and delivering networked education and training (pp. 3-20). The Netherlands: Springer-Verlag. Mason, R. (2004). E-portfolios in lifelong learning. British Journal of Educational Technology, 5(6), 717-727. Mödritscher, F., García-Barrios, V.M., & Gütl, C. (2004). The past, the present and the future of adaptive e-learning: An approach within the scope of the research project AdeLE. In Proceedings of the 7th Conference on Interactive Computer-aided Learning (ICL 2004). Carinthia Tech Institute. Muscettola, N. (1994). HSTS: Integrating planning and scheduling. In M. Zweben & M. Fox (Eds.), Intelligent scheduling (pp. 169-212). Morgan Kaufmann. Paramythis, A., & Loidl-Reisinger, S. (2004). Adaptive learning environments and e-learning standards. Electronic Journal on E-Learning, 2(1), 181-194.
Penberthy, J.S., & Weld, D.S. (1994). Probabilistic planning with information gathering and contingent execution. In Proceedings of AAAI-94 (pp. 31-36). Seattle, WA. Pryor, L., & Collins, G. (1996). Planning for contingencies: A decision-based approach. Journal of Artificial Intelligence Research, 4, 287-339. Puterman, M. (1994). Markov decision processes: Discrete stochastic dynamic programming. John Wiley and Sons. R-Moreno, M.D., Borrajo, D., Cesta, A., & Oddi, A. (2007). Integrating planning and scheduling in workflow domains. Expert Systems with Applications, 33(2), 389-406. R-Moreno, M.D., Prieto, M., & Meziat, D. (2007). An AI electrical ground support equipment for controlling and testing a space instrument. Applied Artificial Intelligence, 21(2), 81-98. Sicilia, M.A. (2006). Semantic learning designs: Recording assumptions and guidelines. British Journal of Educational Technology, 37(3), 331-350. Sicilia, M.A., & Lytras, M. (2005). On the representation of change according to different ontologies of learning. International Journal of Learning and Change, 1(1), 66-79.
Specht, M., & Burgos, D. (2006, June). Implementing adaptive educational methods with IMS learning design. In Proceedings of Adaptive Hypermedia. Dublin, Ireland. Retrieved October 22, 2007, from http://dspace.learningnetworks.org Van Rosmalen, P., Vogten, H., Van Es, R., Van, P., Poelmans, H.P., & Koper, R. (2006). Authoring a full life cycle model in standards-based, adaptive e-learning. Educational Technology & Society, 9, 72-83. Vogten, H., & Martens, H. (2006). CopperCore 3.0. Retrieved October 22, 2007, from http://www.coppercore.org Wasson, B. (1997). Advanced educational technologies: The learning environment. Computers in Human Behavior, 13(4), 571-594. Weld, D. (1999). Recent advances in AI planning. AI Magazine, 20(2), 93-123. Wiley, D.A. (2000). Connecting learning objects to instructional design theory: A definition, a metaphor, and a taxonomy. In D.A. Wiley (Ed.), The instructional use of learning objects. Retrieved October 22, 2007, from http://reusability.org/read/chapters/wiley.doc Wiley, D. (2003). Learning objects: Difficulties and opportunities. Retrieved October 22, 2007, from http://wiley.ed.usu.edu/docs/lo_do.pdf
Chapter X
Knowledge Discovery from E-Learning Activities Addisson Salazar Universidad Politécnica de Valencia, Spain Luis Vergara Universidad Politécnica de Valencia, Spain
ABstrAct This chapter presents a study of the utilization of Web-based learning resources in a virtual campus. A huge amount of historical Web log data from e-learning activities, such as e-mail exchange, content consulting, forum participation, and chats, is processed using a knowledge discovery approach. Data mining techniques such as clustering, decision rules, independent component analysis, and neural networks are used to search for structures or patterns in the data. The results show the detection of the students' learning styles based on a known educational framework, as well as useful global and specific knowledge about academic success and failure. From the discovered knowledge, a set of preliminary academic management strategies to improve the e-learning system is outlined.
IntroductIon This chapter contains a case study on knowledge discovery research carried out on data from graduate and undergraduate courses at the Universidad Politécnica Abierta (UPA) site. This university is the virtual campus of the Universidad Politécnica de Valencia; it currently has more than 6,000 students registered in about 230 courses. Figure
1 shows a general schema of the virtual campus learning environment at UPA. The study pursued two goals: to obtain knowledge about the academic success and failure of the students, and to analyze the e-learning event activity on the campus Web site in order to recognize patterns in the students' learning styles. Events covered the personal and collaborative use of the Web resources in course activities, including content consulting, e-mail
exchange, forum participation, and so on. The underlying hypothesis was that data from e-learning Web activities contain useful hidden knowledge for academic management and for the evaluation of the e-learning system. The chapter describes an integrated methodology to extract knowledge from quantitative and qualitative data, the results obtained and their evaluation, and a strategic action outline derived from the discovered knowledge. Different data mining techniques were used to exploit the e-learning data through a knowledge discovery approach (Cabena, Hadjnian, Stadler, Verhees, & Zanasi, 1997; Fayyad, Piatetsky-Shapiro, Smyth, & Uthurusamy, 1996; Maimon & Rokach, 2005). These techniques included independent component analysis (ICA), neural networks (NN), clustering, linear regression, and decision trees. ICA allowed distinguishing the independence of the events and detecting learning styles; NN were used to obtain patterns of student behaviour; linear regression was employed for numeric analyses of the relationship between student performance and event activity levels. Quantitative clustering and qualitative conceptual clustering algorithms were applied to group the data into homogeneous datasets. To enable qualitative analysis, continuous numeric data were converted to discrete values and descriptions for their interpretation were obtained. Finally, on the descriptive datasets, an association rule mining process was carried out by applying the C4.5 decision tree algorithm. The obtained decision rules involved global and specific knowledge that was evaluated by academic experts, taking into account its validity, novelty, and simplicity. The results were considered useful for e-learning academic management. Data from the use of the UPA Web facilities included the following Web log statistics about e-learning event activities: course access, agenda use, news reading, content consulting, e-mail exchange, chats, workgroup documents,
exercise practice, course achievement, and forum participation. The date and time of each event were also available. Besides the information on the Web activity, the exercises completed and grades obtained by the UPA's students were used in the knowledge discovery process. The data were collected from the virtual campus Web site in the period from January 2002 to March 2005, totaling 2,391,003 records. The process of knowledge discovery covered the following stages:

• Building a reliable data warehouse, by filtering data inconsistencies, solving data heterogeneity problems, and processing data.
• Obtaining and interpreting patterns of student behaviour in e-learning activities by using independent component analysis, neural networks, and linear regression analysis.
• Obtaining homogeneous data groups by applying clustering and selecting data groups, sorted by research topic, for the definition of decision rules.
• Applying a knowledge representation on the selected groups using decision trees to obtain decision rules on the factors that influence academic success and failure.
• Evaluating the knowledge findings with experts, from the point of view of their validity, novelty, and simplicity.
• Outlining strategies for the improvement of academic processes.
The following sections describe the background and context of this work and the results obtained in each stage of the knowledge discovery process, showing partial findings from the data mining techniques. The final sections include the global and particular conclusions about academic performance and learning styles, as well as future work.
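As a minimal sketch of the clustering stage used to form homogeneous student groups, the pure-Python k-means below partitions hypothetical normalized activity scores (1-100 scale). The data values and initial centroids are invented for illustration; the study itself used quantitative and conceptual clustering algorithms on the real warehouse.

```python
def kmeans(points, centroids, iters=10):
    """Plain k-means on scalar features (e.g., normalized event
    activity levels), with caller-supplied initial centroids."""
    for _ in range(iters):
        # assign each point to its nearest centroid
        clusters = {i: [] for i in range(len(centroids))}
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # recompute each centroid as the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in clusters.items()]
    return centroids, clusters

activity = [2, 3, 4, 40, 42, 45, 90, 95]          # hypothetical activity scores
cents, groups = kmeans(activity, [0.0, 50.0, 100.0])
# three homogeneous groups emerge: low, medium, and high activity students
```

The same grouping idea extends to vectors of per-event activity levels by replacing the absolute difference with a Euclidean distance.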
Figure 1. Virtual campus learning environment at UPA
BAckground Knowledge discovery in databases (KDD), or just knowledge discovery, is a subdiscipline of computer science which aims at finding interesting regularities, patterns, and concepts in data. Knowledge discovery has usually referred to the global process that spans from data to knowledge, using different statistical and heuristic techniques called data mining techniques; however, in the current literature "knowledge discovery," "data mining," and "machine learning" are often used interchangeably. Recently, the data mining approach has been applied in academic research. Those applications include predictive or descriptive modelling of educational data. Traditional sources of data have been databases or questionnaires and, more recently, data from the Web. Some of the works on educational predictive models from databases or questionnaires are the following: predicting whether students graduate in six years (Barker, Trafalis, & Rhoads, 2004), selecting students who would need remedial classes (Ma, Liu, Wong, Yu, & Lee, 2000), and predicting an individual student's final academic achievement by modelling with decision trees and hierarchical models (Gasar, Bohanec, & Rajkovic, 2002). Other works predicting students' success, errors, or help requests are: predicting the time spent in solving an exercise task by using neural networks (Beck & Woolf, 1998), and predicting for which word the student asks for help when reading English, where information about the student (gender, approximate reading test results of the day, help request behaviour) and the word (length, frequency, etc.) was processed (Beck, Jia, Sison, & Mostow, 2003). Regarding descriptive data modelling techniques applied to educational data, there are several references on subjects such as analyzing factors that affect academic success, desertion and retention of students, mining navigation patterns in log data, analyzing students' competence in course topics, analyzing students' errors in program code, and mining student answers from a Web-based tutoring tool database to get pedagogically relevant information and to provide feedback to the teacher (Kristofic & Bielikova, 2005; Merceron & Yacef, 2003; Romero, Ventura, De Bra, & Castro, 2003; Salazar, Gosalbez, Bosch, Miralles, & Vergara, 2004; Shin & Kim, 1999). Data mining from Web data (Web mining) is a new research area that seeks to understand the information flow on the Web by means of automated techniques for searching knowledge. This
area has a wide range of emergent applications including e-learning, e-commerce, automated information assistants, and many other applications that operate through the Web (Srivastava, Cooley, Deshpande, & Tan, 2000). One example of Web mining is the classification of Web pages based on understanding the textual content of e-mails through hierarchical probabilistic clustering (Larsen, Hansen, Szymkowiak, Christiansen, & Kolenda, 2002).

Table 1. Dimensions of learning and teaching styles underlying in the Web data

1a. Student's learning style dimensions
1. Perception: Sensory / Intuitive
2. Input: Visual / Auditory
3. Organization: Inductive / Deductive
4. Processing: Active / Reflective
5. Understanding: Sequential / Global

1b. Professor's teaching style dimensions
1. Content: Concrete / Abstract
2. Presentation: Visual / Verbal
3. Organization: Inductive / Deductive
4. Student participation: Active / Passive
5. Perspective: Sequential / Global

Nowadays there are many open topics in e-learning concerning the quality evaluation of the system, the knowledge and control of the Web activities of the students, and the use of the huge quantity of outcome information from the e-learning process (Kimber, Pillay, & Richards, 2007; Liaw, Chen, & Huang, 2006; Liu & Yang, 2005; Piramuthu, 2005; Pituch & Lee, 2006; Reilly, 2005; Selim, 2007; Shee & Wang, 2007; Sun, Tsai, Finger, Chen, & Yeh, 2007). In particular, there is increasing interest in Web mining of e-learning data. Some examples are: predicting drop-out from demographic data (sex, age, marital status, etc.) and course data in the first half of the course (Kotsiantis, Pierrakeas, & Pintelas, 2003); predicting the course score by processing success rate, success at first try, number of attempts, time spent on the problem, and so forth (Minaei, Kashy, Kortemeyer, & Punch, 2003); and combining several weak classifiers by boosting to predict the final score (Zang & Lin,
2003). Recently, new holistic Web mining approaches that consider extracting learning styles from the Web navigational behaviour of the students have been outlined (Garcia, Amandi, Schiaffino, & Campo, 2007; Mor & Minguillón, 2004; Xenos, 2004). A learning-style model classifies students according to where they fit on a number of scales corresponding to the ways in which they receive and process information. One of the most accepted learning style taxonomies for engineering students is that of Felder and Silverman (1988) (see Table 1). A learning style is formed by the combination of one feature from each dimension, for instance, intuitive-visual-deductive-active-global. This model was used in the present research. Our work pursued two objectives: (1) to find patterns in the academic performance of the students, and (2) to detect student learning styles underlying the Web data. Thus, the framework is Web mining for descriptive modelling of educational data. We contribute an empirical study with a huge amount of data to contrast the results. The complexity of the social phenomenon studied requires this kind of analysis, as can be found in recent literature (Levy, 2007; Puntambekar, 2006; Schellens & Valcke, 2006; Stephenson, Brown, & Griffin, 2006). In contrast to using one technique such as Bayesian networks, we used an integrated approach with several techniques in
order to exploit and complement the advantages of each technique. This work is developed in a novel research line that pursues the discovery of student learning styles in order to feed back into and improve the e-learning system using non-model-based methods. The understanding of learning styles is more difficult in e-learning than in traditional education environments, so research on this topic is always a challenge. Mining patterns from data is a classical task employed for a long time, and it is especially useful and suitable in the e-learning context due to the new data-generating processes that come from Web technological innovations. The applied algorithms search for patterns in the data, and their output could be stored in a knowledge-based system. However, the scope of the chapter does not comprise the knowledge storage stage, so metadata are not included for the output of the algorithms; although, from the decision rules obtained and using, for instance, logic programming languages, the creation of a knowledge base can easily be undertaken. In addition, the set of decision rules, cluster descriptions, and learning styles detected in this work implicitly defines a kind of informatics ontology that could be used to share or reuse the discovered knowledge, given an implementation format and language. There are several references comparing informatics ontologies and statistical approaches such as those included in this chapter (Caragea, Pathak, & Honavar, 2004; Gomez-Perez & Manzano-Macho,
2004; Pils, Roussaki, & Strimpakou, 2006). Some definitions of informatics ontology are the following: informatics ontologies define the kinds of things that exist in the application domain, allowing no confusion of terms and symbols (Sowa, 2000); informatics ontologies define a kind of explicit specification of a concept set (Gruber, 1995); informatics ontologies are defined as a formal specification of a shared concept set (Borst, 1997). The principal motivation of informatics ontologies is to allow sharing and reusing knowledge bases computationally, by using a common vocabulary. An informatics ontology can be defined in several ways, but it necessarily includes a vocabulary of terms and some specification of their meanings. This includes definitions and specifications of the relationships between terms so that, in general, a structure is imposed on the domain and term interpretation is constrained (Fridman & McGuinness, 2001; Uschold, King, Morales, & Zorgios, 1998; Weigand, 1997). Thus, an informatics ontology can be a logic theory, a semantic formal description, the vocabulary of a logic theory, or a specification of conceptions.
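The combinatorics of the Felder-Silverman model described earlier can be made concrete with a short sketch: one style is one pole chosen from each dimension. The dimension and pole names below are transcribed from Table 1; the code itself is illustrative, not part of the study.

```python
from itertools import product

# Felder-Silverman learning-style dimensions as (dimension, poles) pairs
dimensions = [
    ("perception", ("sensory", "intuitive")),
    ("input", ("visual", "auditory")),
    ("organization", ("inductive", "deductive")),
    ("processing", ("active", "reflective")),
    ("understanding", ("sequential", "global")),
]

# one learning style = one pole per dimension; the Cartesian product
# enumerates every possible combination
styles = list(product(*(poles for _, poles in dimensions)))
```

With five binary dimensions, this yields 2^5 = 32 possible styles, including the example combination cited in the text (intuitive-visual-deductive-active-global).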
dAtA PreProcessIng A reliable data warehouse was created from the historical (2001-2005) Web log data from the UPA. To this end, different operations were applied to
Figure 2. Structure of the Web data
the original data: filtering of missing and erroneous data, solving data heterogeneity problems due to different data sources, adaptation of variables for data processing, and data retrieval for stratified analysis. Figure 2 shows a simplified scheme of the data entities at the UPA. The objective data were collected from the Web activity of the virtual campus in two tables: grades and events. The event table contained one record for each event at the virtual campus Web site. The fields of this table were: group, course, student code, event time, and event class; see the event classes (e-learning activities) in Tables 2 to 11. The total number of events in the analyzed period was 2,391,003, of which 2,124,734 corresponded to student events; the rest were events of teachers and the system administrator. From the student events, only the records corresponding to courses with more than 3 students (2,120,547 records) were selected for the knowledge discovery study. A new event table was calculated summarizing the records per student and kind of event. This table contained the fields: group, course, student code, event class counter (total event instances for that event class), and average event class time (average time of the student's activity in that event class); it had 63,207 records. In order to build a data warehouse for data mining, a new table with the projection of the event table onto the student code was calculated; it had 8,909 records. This number of records is due to the fact that both active and inactive student data were contained in the initial event table. For each student, the corresponding total instance counter for each kind of event was calculated, and a normalized value (1-100 scale) of student event activity was computed with the following equation,
event_activity_student = (event_total_instance_student / instance_maximum_event) × 100    (1)
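Equation 1 can be sketched in a few lines. The student identifiers and per-student counts below are invented for illustration; the computation itself is exactly the normalization described above.

```python
def normalize_activity(counts):
    """Scale each student's event-instance count to a 1-100 scale,
    relative to the maximum count observed for that event class
    (Equation 1)."""
    peak = max(counts.values())
    return {student: c * 100 / peak for student, c in counts.items()}

# hypothetical content-consulting counts for three students
content_consulting = {"s01": 150, "s02": 300, "s03": 75}
norm = normalize_activity(content_consulting)
# the student with the maximum count maps to 100.0
```

Repeating this per event class produces the per-student activity fields that were added to the data warehouse.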
The student activity data were added as fields to the data warehouse. From the grades table, the average grade for each student was calculated and added to the data warehouse. Because not all the virtual courses have evaluation, only 1,873 of the rows of the data warehouse had a value for the variable average grade. To get the qualitative descriptions of the student event activity, Tables 2 to 11 were defined.
Table 2. Course access description

Type    Course access
1       Almost never
2       Occasionally
3       Usually
4       Very frequently

Table 3. Agenda using description

Type    Agenda using
1       Does not use it or use it a little
2       Average use
3       Use it a lot

Table 4. News reading description

Type    News reading
1       Do not read it or almost never read it
2       Average use it

Table 5. Content consulting description

Type    Content consulting
1       Almost never
2       Occasionally
3       Usually
4       Very frequently
Knowledge Discovery from E-Learning Activities
Table 6. E-mail exchange description

Type    E-mail exchange
1       Sporadically exchange it
2       Usually exchange it
3       Copiously exchange it

The global mean and limits of the event activity were calculated using the following equations:

mean_event = total_instance_number_event / student_number    (2)
Table 7. Chat description

Type    Chats
1       Not very active
2       Fairly active
3       Very active

Table 8. Workgroup documents

Type    Workgroup documents
1       Low collaboration
2       Average collaboration
3       High collaboration

Table 9. Exercise practice description

Type    Exercise practice
1       Few exercising
2       Enough exercising
3       Much exercising

Table 10. Course achievement description

Type    Course achievement
1       Sporadically
2       Usually
3       Very frequently

Table 11. Forum participation description

Type    Forum participation
1       Little
2       Average
3       High
limit_event = mean_event ± (maximum − minimum)_event × 0.3    (3)

Equation 3 was applied after checking the normality of the event instance distributions. The superior limit (suplim) for an event instance was calculated using the plus sign in Equation 3, and the inferior limit (inflim) was calculated using the minus sign. Thus, 60% of the probability density distribution of the event instance was contained between the superior and inferior limits. Given the event total instance number (event activity) for a student, the corresponding description values were calculated using the description tables and the event limits. To allocate a value of the description table from an event activity value for a student, the following algorithm was used for 4-entry description tables:

If event_activity < inflim_event, allocate to Type 1
If inflim_event <= event_activity < mean_event, allocate to Type 2
If mean_event <= event_activity < suplim_event, allocate to Type 3
If event_activity >= suplim_event, allocate to Type 4

In the case of 2-entry tables, Type 1 was allocated when event_activity < mean_event, and Type 2 when event_activity >= mean_event. For tables with 3 entries, Type 1 was allocated when event_activity < inflim_event, Type 2 when inflim_event <= event_activity < suplim_event, and Type 3 when event_activity >= suplim_event.
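The allocation rules above translate directly into a small function; the signature below is our own sketch, with inflim, mean, and suplim taken as the per-event limits of Equations 2 and 3.

```python
def allocate_type(activity, inflim, mean, suplim, entries=4):
    """Map a student's event activity value to a description-table type
    using the inferior/superior limits of Equations 2 and 3."""
    if entries == 2:
        return 1 if activity < mean else 2
    if entries == 3:
        if activity < inflim:
            return 1
        return 2 if activity < suplim else 3
    # 4-entry tables (e.g. course access, content consulting)
    if activity < inflim:
        return 1
    if activity < mean:
        return 2
    if activity < suplim:
        return 3
    return 4

# Example with illustrative limits inflim=20, mean=50, suplim=80:
print(allocate_type(60, 20, 50, 80))  # -> 3
```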
Besides assigning description values to event activity values, the instance time of the events was discretized using the following hour intervals: [0-6) = dawn, [6-12) = morning, [12-18) = evening, and [18-24) = night. The qualitative value for average grade was calculated using the following discretization intervals: [0-5) = unsatisfactory, [5-7] = fair, [7-9) = good, and [9-10] = excellent. At the end of the data preprocessing stage, the obtained data warehouse consisted of 8,909 student records × 27 variables, as follows: 3 record identification variables (group, course, student code), 4 variables for the quantitative and qualitative values of average grade and average time, and 20 variables for the quantitative and qualitative values of the activity for the different kinds of e-learning events.
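The two discretizations can be sketched as plain threshold functions; note that the stated intervals [5-7] and [7-9) overlap at exactly 7, which the sketch resolves in favour of "fair".

```python
def hour_label(hour):
    """Discretize an event hour into the four daypart intervals."""
    if 0 <= hour < 6:
        return "dawn"
    if hour < 12:
        return "morning"
    if hour < 18:
        return "evening"
    return "night"

def grade_label(grade):
    """Discretize a 0-10 average grade into the qualitative values.
    A grade of exactly 7 is labelled 'fair', matching [5-7]."""
    if grade < 5:
        return "unsatisfactory"
    if grade <= 7:
        return "fair"
    if grade < 9:
        return "good"
    return "excellent"

print(hour_label(13), grade_label(8.5))  # -> evening good
```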
DATA MINING SCHEME

Figure 3 shows a general schema of the relationships between the data mining techniques applied on the data warehouse. Quantitative clustering was made by applying the fuzzy c-means algorithm (Bezdek & Pal, 1992) and qualitative clustering
using the conjunctive conceptual algorithm (Michalski & Stepp, 1983). The total population of the data warehouse was divided according to the course types into: graduate (informal) courses, doctorate courses, and regular academic career courses. In addition, each of those population divisions was divided into two subsets: cases with grades and cases with no grades. Therefore, six disjoint data subsets to analyze were generated.
INDEPENDENT COMPONENT ANALYSIS (ICA)

ICA is a powerful statistical technique that has been applied successfully in different areas of signal processing (Cichocki & Amari, 2001; Hyvärinen, Karhunen, & Oja, 2001). ICA assumes that there is an M-dimensional zero-mean vector s(t) = [s1(t),...,sM(t)]^T, such that the components si(t) are mutually independent. The vector s(t) corresponds to M independent scalar-valued source signals si(t). The multivariate probability density function (p.d.f.) of the vector can be rewritten as the product of marginal independent distributions
Figure 3. Interconnection of the applied data mining techniques
p(s) = ∏_{i=1}^{M} p_i(s_i). A data vector x(t) = [x1(t),...,xN(t)]^T
is observed at each time point t, such that x(t) = As(t), where A is called the mixing matrix and is full rank N × M (Hyvärinen, Karhunen, & Oja, 2001). There are several standard ICA algorithms, such as FastICA (Hyvärinen & Oja, 1998), Extended Infomax (Lee, Girolami, & Sejnowski, 1999), or TDSEP (Ziehe & Müller, 1998). Those algorithms rely on assumptions about the source signals that imply a given model for the source distributions, or make assumptions that are fitted only to specific applications. We applied standard ICA algorithms and a new nonparametric ICA algorithm proposed in Annex 1. This latter algorithm yielded the best results because it was more adaptable to the data: it does not assume any restriction on the data, since the probability distributions are calculated directly from the training set through a nonparametric approach, and it addresses the independence between the source components directly from its definition based on the marginal distributions.
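The mixing model x(t) = As(t) can be illustrated with a standard ICA algorithm; the sketch below uses FastICA from scikit-learn on synthetic sources (not the nonparametric algorithm of Annex 1), and all signal choices are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two independent, non-Gaussian sources s(t): a square wave and
# Laplacian noise, normalized to unit variance.
s = np.c_[np.sign(np.sin(3 * t)),
          rng.laplace(size=t.size)]
s /= s.std(axis=0)

# Observations x(t) = A s(t), with a full-rank mixing matrix A.
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
x = s @ A.T

# FastICA estimates the sources from the mixtures alone.
ica = FastICA(n_components=2, random_state=0)
s_hat = ica.fit_transform(x)
```

The recovered components match the true sources only up to permutation, sign, and scale, which is why, in practice, estimated sources are identified by their correlation with observed event signals.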
ICA was applied on the UPA data in order to identify independent "sources" (independent event activities), that is, searching for those event activities that can be separated by an ICA algorithm as sources. Figure 4 shows the estimated activity for the 10 events (see Tables 2-11) on the UPA Web, plus the average connection time and average grade, for 1,072 students of graduate courses with grades. Note that data are displayed as signals (vectors of samples) in Figures 4 and 5. The latter shows the sources estimated by an ICA algorithm; note that the signal of event 8 (exercise practice) in Figure 4 is very similar (highly correlated) to source 5 in Figure 5, which means that the activity corresponding to the workgroup document event could be recognized as an independent source for this subset of data. After analyzing the results from ICA applied to the different data subsets, and considering additional information about the courses and students in the campus, we can infer the following conclusions:
Figure 4. Data of the graduate courses with grades for the 10 events, and average grades and connection time: e1 (course access) … e10 (forum participation)
Figure 5. Sources calculated by an ICA algorithm for data of Figure 4
•	E-mail exchange was independent in some cases. It could be due to weaknesses in the teaching strategies for promoting student interactivity; e-mail exchange then turns into e-mail review done as a routine.
•	In courses with no grades, the workgroup document event was independent. The lack of evaluation and grades discourages the participation of students in collaborative tasks.
•	In some datasets the content consulting event was independent, as a reflection of a kind of distributed passive learning (DPL) nature of the Web platform. Thus content consulting becomes a routine consisting of downloading materials with no interactive learning process.
•	Exercise practice and course achievement were also found to be independent events for some datasets. It could be due to the profile of some students, which includes an information and telecommunications background and knowledge about the course contents. For those students, participating in those event activities could be irrelevant.
PRINCIPAL COMPONENT ANALYSIS (PCA) AND ICA

PCA is a very well known technique that reduces the variable dimensionality in statistical multivariate analysis (Hardle & Simar, 2006). We applied PCA for grouping the events of the Web activity into learning dimensions, taking into account Felder's framework (Felder & Silverman, 1988). PCA reduced the 10 Web event activities to 5 components. To solve the problem of detecting learning styles in e-learning, we assume that the underlying independent sources that generate the Web log data are dimensions of the learning styles of the students, and that we observe x, linear combinations of those styles, through the use of the facilities by the students at the virtual campus. Then si (i = 1,...,5 learning style dimensions) correspond to the "perception," "input," "organization," "processing," and "understanding" dimensions (see Table 1); and the mixing matrix A provides the relation between e-learning style dimensions and e-learning event activities, aij (i = 1,...,5 learning style dimensions; j = 1,...,10 e-learning activities).
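A sketch of the dimensionality reduction step is shown below, using scikit-learn's PCA on synthetic stand-in data (the real UPA warehouse is not reproduced here, and the variable layout is an assumption).

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Synthetic stand-in for the warehouse: 1,072 students x 10 Web event
# activities on the 1-100 scale.
activities = rng.uniform(1, 100, size=(1072, 10))

# Reduce the 10 event activities to 5 components, one per candidate
# learning-style dimension of Felder's framework.
pca = PCA(n_components=5)
scores = pca.fit_transform(activities)

# Each row of pca.components_ weights the 10 activities; sorting a row
# by absolute weight gives a "sorted Web activity contribution".
top = np.argsort(-np.abs(pca.components_[0]))[:6]
print(top)
```

Reading off the largest absolute weights per component is how associations like those reported in Table 12 can be extracted from an estimated component or mixing matrix.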
Table 12 contains the first six sorted contributions of Web activities of the ICA mixing matrix for the 5 estimated sources. Each source was associated with one learning dimension of Table 1 by analyzing the weights of the Web activities and considering the principal evaluation methodologies employed by teachers for graduate courses with grades. Dimension 1 was not detected, and dimension 5 was detected twice. The methodologies assigned grades focusing on: achievement, individual student participation, or group work. The implicit teaching styles of the evaluation methodologies encourage specific learning styles of the students, as we explain below. The learning dimension 1 (sensory-intuitive), corresponding to "perception," was not detected in the ICA mixing matrix; it could be because the emphasis of the educational strategies did not favour that dimension. From Table 12, the relationship between learning style dimensions and Web activities can be made; see Table 13, where we have added a possible Web activity combination for learning dimension 1. Note that some Web activities are associated with more than one dimension; this makes sense because a Web activity could demand several capabilities that the students use in their learning process. Allowing that kind of relationship, we can obtain more realistic and versatile descriptions of the student learning styles, besides including all the dimensions of the learning framework. In Garcia et al. (2007), just three dimensions of the Felder and Silverman (1988) model were considered, and the Bayesian network proposed constrained the
Table 12. ICA mixing matrix (*learning style dimension, **workgroup documents)

LSD*  Sorted Web activity contribution
2     chat 1        forum .82283    news .30755      e-mail .16476     access .14756   exercises .14231
4     e-mail 1      content .34189  wg-doc** .32297  exercises .28768  forum .22548    chat .20078
3     wg-doc** 1    news .80531     achieve .4122    content .39987    chat .39421     e-mail .31666
5'    achieve 1     content .45124  agenda .2117     access .21116     forum .20087    news .18239
5     access 1      agenda .95776   content .85549   achieve .7143     e-mail .5832    chat .49774
Table 13. Association between learning styles and Web activities

Learning Style           Dimension       Web event activity
1  Sensory-Intuitive     Perception      chats, forum participation, course access
2  Visual-Auditory       Input           chats, forum participation, news reading, e-mail exchange
3  Inductive-Deductive   Organization    workgroup document, news reading, course achievement, content consulting
4  Active-Reflective     Processing      e-mail exchange, content consulting, workgroup document, exercise practice
5  Sequential-Global     Understanding   course access, agenda using, content consulting, course achievement
relationship of the Web activities to just one dimension of the learning model. Figure 6 shows the sources 3, 4, and 5 (organization, processing, understanding) obtained for the graded graduate course dataset. Four labelled characterised zones in the learning style space are displayed: (1) represents the most important learning style in the population; the learning for the students in this zone emphasizes global understanding, active processing, and deductive logic (natural human teaching style), with high grades. (2) This learning style is focused on inductive logic (natural human learning style), with sequential understanding and relatively active processing; students within this style could have natural skills for virtual education. (3) It is characterised by global understanding, deductive logic, and reflective processing; students within this style would have the higher abstraction skills that need to be taught. (4) Basically, this cluster represents outliers with individual learning styles. We can conclude that the understanding dimension enables the learning styles to be projected clearly, and its principal components are achievement, content, and agenda. This finding confirms the assumption that the quickest way to change the learning style of the student is to change the assessment style; that is, the expected evaluation biases how the student learns (Elton & Laurillard, 1979).
We made a cluster validation procedure to determine the best cluster configuration for the data of Figure 6. It consisted of estimating the partition coefficient and the partition entropy coefficient for different numbers of clusters (Haldiki, Batistakis, & Vazirgiannis, 2001). The best cluster configuration for the data of Figure 6 was 4 clusters; a detailed explanation of the cluster validation procedure is given in the clustering analysis section. Figure 7 shows three sources for graduate courses with no grades. The distribution of the data in Figure 7 does not allow learning style groups to be formed and shows all the subjects within a unique learning style. As the understanding and organization dimensions do not discriminate the projection of the learning styles, only the processing dimension provides some discrimination. The unique learning style emphasises reflection over action, corresponding to the content consulting and exercise practice components of that dimension. The conclusion is that the lack of assessment does not allow student learning styles to develop. Results for regular academic career courses were similar to the graduate course results, finding meaningful learning styles for courses with grades. The results of this section could be analyzed as a kind of ontology. The conceptions are the learning styles detected, related with the dimensions
Figure 6. Three sources in a learning style space for graduate courses with grades
Figure 7. Three sources in a learning style space for graduate courses with no grades
Figure 8. Ontology of learning styles detected
of learning (input, organization, processing, understanding), and ultimately with the Web learning activities. Figure 8 shows such an ontology (the numbers in the boxes correspond to the numbers of the learning styles in Table 1).
REGRESSION ANALYSIS AND NEURAL NETWORKS

A linear regression model was designed with the average grade as the dependent variable and the 10 Web event activities plus the average connection time as independent variables. The adjustment of the linear model was not statistically significant, so there are some nonlinear relations between the variables. We then tried more adaptive models, applying learning vector quantization (LVQ) neural networks to classify the different datasets into four classes depending on the average grade: [0-5) = unsatisfactory, [5-7] = fair, [7-9) = good, and [9-10] = excellent. The LVQ algorithm includes self-organizing and competitive stages; four output neurons were defined, each corresponding to one of the target classes. Seventy-five percent of the data were used in the training phase and the rest of the data in the testing phase. A Kohonen learning rate of 0.01 and
conscience learning rate of 0.001 were employed. Different tests were made varying the number of neurons in the hidden layer from 4 to 10. The groups found automatically by LVQ could be interpreted with contents similar to those obtained by the fuzzy clustering algorithm described in the next section. However, some dissimilar groups were found by LVQ; for example, students that only have a lot of chats and obtain an excellent average grade, or students that exchange e-mails and have chats with excellent average grades.
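As an illustration of the competitive stage of LVQ, a minimal LVQ1 sketch in NumPy is shown below; it omits the self-organizing stage and the conscience mechanism mentioned above, and the training parameters are illustrative rather than those of the study.

```python
import numpy as np

def lvq1_train(X, y, lr=0.01, epochs=50, seed=0):
    """Minimal LVQ1: one prototype per class, initialized at the class
    mean, pulled toward same-class samples and pushed away otherwise."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos = np.array([X[y == c].mean(axis=0) for c in classes])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(protos - X[i], axis=1))
            step = lr * (X[i] - protos[j])
            protos[j] += step if classes[j] == y[i] else -step
    return classes, protos

def lvq1_predict(X, classes, protos):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```

On separable classes this recovers the class regions; the chapter's four-class grade setup would use one output prototype per grade label.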
CLUSTERING ANALYSIS

In the clustering procedure, a (specified) number of clusters is calculated from a set of objects. A cluster is represented by a cluster center, which defines the center point of the cluster in the feature space. A cluster center is thus an (imaginary) object which defines the typical or ideal representative of its cluster.
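As a concrete sketch of such a procedure, a minimal fuzzy c-means (the algorithm applied in this section) together with the partition coefficient and partition entropy validity indices might look as follows; this is an illustration, not the implementation used in the study.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=1.3, iters=100, seed=0):
    """Minimal fuzzy c-means: alternate centroid and membership updates
    with fuzziness exponent m (the study used m = 1.3)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        w = u ** m                                   # fuzzified memberships
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = (1.0 / d) ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    return u, centers

def partition_coefficient(u):
    """pc = (1/N) * sum of u_ik^2; closer to 1 means a crisper partition."""
    return float((u ** 2).sum() / len(u))

def partition_entropy(u):
    """pe = -(1/N) * sum of u_ik * log u_ik; lower means crisper."""
    return float(-(u * np.log(u + 1e-12)).sum() / len(u))
```

Running the sketch for c = 2,...,16 and plotting pc and pe against c gives validation curves of the kind used with the elbow criterion described below.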
Figure 9. Cluster structure
Figure 9 shows a scheme of a data cluster structure, consisting of five clusters and the distances between cluster centroids. Doughnut sizes represent the number of records in each cluster, and doughnut slices represent the 12 variable values of the cluster centroids. The fuzzy c-means algorithm was applied using a fuzziness degree of 1.3 (exponent m). The validity measure was the partition coefficient, and the maximal number of classes used in training was 16. The calculation was carried out for all class numbers, and the best number of classes was determined and validated by checking the evolution of the partition coefficient (pc) vs. the classification entropy (pe) through the class number range; see Figure 10, which was calculated with the graduate with no-grades data subset; the best partitioning is at c = 5.

Figure 10. Cluster validation

The partition coefficient and the partition entropy both tend towards monotone behaviour depending on the number of clusters. To find the "best" number of clusters c*, one chooses the number where the entropy value lies below the rising trend and the value for the partition coefficient lies above the falling trend. On viewing the curve of all the connected values, this point can be identified as a kink (thus the name "elbow criterion"). The best partitioning of the clusters applies at that point, with a value of c that gets the highest cluster differentiation (maxima of inter-cluster mean distances) with good homogeneity within cluster members (minima of distances between cases and centroids) (Haldiki et al., 2001). The application of the fuzzy c-means algorithm on the defined 6 data subsets generated 23 clusters or groups: ten groups for the data subsets of graduate and regular career courses with grades, and thirteen groups for the same courses with no grades. For the doctorate courses no groups were generated, because there were not enough cases to analyze. Besides the fuzzy c-means, the conjunctive conceptual algorithm was used on the qualitative variables to get logical conjunctions of relations between the variables. From the analysis of the obtained cluster centroids, the following conclusions on academic performance were derived.
For the regular career courses with grades:

•	The student group with the best grades shows a similar activity level in the different event types, except in course achievement, where it has a higher activity than the other groups.
•	The student group with the worst grades shows a higher exercise practice proportion than the other groups; however, its course achievement is relatively low.
•	The intermediate academic performance groups show an imbalance in the proportion of activities, focusing on e-mail exchange and agenda using.
•	This data subset does not use the events that require interactivity among several students (chats, forum participation, and workgroup documents).
•	The students with the worst grades are devoted mainly to news reading, content consulting, and e-mail exchange, but they do not undertake course achievements.
•	The average grade of the clusters follows a normal distribution, with the clusters with the best and worst grades being less numerous than the other clusters.
For the graduate courses with grades:

•	There are no significant differences in the academic performance of the clusters.
•	The worst grade group is the one with the highest course achievement.
•	The best grade group shows similar behaviour in each of the e-learning activities. This group has the highest value of course access activity.
•	In this data subset, the events that require interactivity among students were used, but their activity was lower than that of the events that do not require student interactivity.
•	The best grade group was the second most numerous one, and the worst grade group was the least numerous one.
•	The best grade groups used the e-mail more frequently than the others.
For the regular career courses with no grades:

•	Every group exhibited a good utilization of the interactivity events forum and chats, but not of the workgroup document event.
•	Every group showed a similar proportion of exercise practice and course achievement events.

For the graduate courses with no grades:

•	Every group showed a high utilization of interactivity events; even the workgroup documents activity was high.
•	There was a group with high values for the three interactivity activities, but its value for the exercise practice event was very low, as in the other groups.
Clusters calculated from the graduate course with no-grades data subset are shown in Figure 11. Event activity values in the cluster representation are normalized.
MINING DECISION RULES

Average grade was defined as the outcome variable for mining decision rules on academic performance; 250 decision rules were obtained applying the C4.5 algorithm (Quinlan, 1992) to the clusters of the
Figure 11. Event activity proportion at calculated clusters from graduate course data subset
graduate and regular career course data subsets, and to the group consisting of the union of both data subsets. Decision rules involve global content or specific content knowledge. Some of the mined decision rules are listed below, including the success percentage of each rule. For graduate courses:
Rule No 114: If Course = Economic and Financial System and Exercise practice = Much exercising and Course achievement = Usually Then Average Grade FAIR [83.54%]
Rule No 9: If Agenda using = Average use and News reading = Does not use it or use it a little and Content consulting = Usually and Forum participation = Average Then Average Grade GOOD [80.65%]
Rule No 115: If Course = Economic and Financial System and Average time = Evening and E-mail exchange = Usually exchange it and Course achievement = Very frequently Then Average Grade FAIR [73%]
Rule No 16: If Content consulting = Very frequently and Course achievement = Usually and Forum participation = Average Then Average Grade GOOD [77.24%]
Rule No 116: If Course = Economic and Financial System and Average time = Evening and News reading = Average use it and Content consulting = Usually and Chats = Fairly active Then Average Grade FAIR [81.5%]
Rule No 36: If Course = Environment System Management and Average time = Evening and E-mail exchange = Copiously exchange it and Chats = Very Active Then Average Grade EXCELLENT [71.23%]

Rule No 55: If Course = Renewable Energy and Chats = Fairly active Then Average Grade EXCELLENT [71.45%]

Rule No 67: If Course = Teledetection Systems for Environment Risk Prevention and Course access = Very frequently and Content consulting = Occasionally Then Average Grade FAIR [80%]

Rule No 78: If Course = Basic Environment Technical English and Course achievement = Null Then Average Grade UNSATISFACTORY [80%]
For the regular career courses:
Rule No 121: If Course = Economic and Financial System and Average time = Morning and Agenda using = Average use and Content consulting = Usually Then Average Grade FAIR [79.57%]

Rule No 133: If Course = Management I and News reading = Average use it and Exercise practice = Enough exercising Then Average Grade GOOD [94.85%]

Rule No 140: If Course = Economic and Financial System and Average time = Evening and E-mail exchange = Usually exchange it and Chats = Very active and Exercise practice = Enough exercising Then Average Grade GOOD [95.27%]
Rule No 141: If Average time = Evening and Chats = Null and Exercise practice = Enough exercising and Forum participation = High Then Average Grade GOOD [84.24%]

Rule No 154: If Course = Management I and Exercise practice = Much exercising and Course achievement = Usually Then Average Grade EXCELLENT [90%]

The knowledge findings obtained in the research were evaluated by academic administration experts of the university on aspects such as validity, novelty, and simplicity, obtaining a general score of 8.2 points on a scale from 1 to 10. From the point of view of informatics ontologies, the trees formed by the decision rules could be analyzed as "concepts" about good and bad academic achievement of the students. Those rules provide structure for the relationships between the terms that define student behaviour in the Web e-learning activities. Silvescu, Reinoso-Castillo, and Honavar (2001) explain how implicit ontologies drive the information extraction and data integration procedures used in knowledge acquisition from data, specifically using decision trees.

STRATEGIC ACTION OUTLINE

The following are some preliminary strategies for academic management that were proposed based on the results of the knowledge discovery process.

•	To empower collaborative informatics to include practical virtual labs. Some of the technical subjects to be included are: workgroup, workflow, data mining, searchers, multimedia, and customer research management.
•	To collaborate with other educational networks or virtual platforms in order to reinforce teaching quality and promote the creation of virtual learning networks.
•	To design and implement pedagogic course syllabi for students to understand e-learning education. The appropriate utilization of information and communications technologies helps to educate more and better.
•	To adapt the roles of counsellors and of teaching, supporting, and administrative staff to the classes in cyberspace.
•	To promote knowledge of the virtual campus across the whole university to generate synergic relationships between university people. This can produce positive feedback to the virtual campus in the form of new students and teachers, e-learning project creation, and communications for improvements.
•	To propose special events for diffusion of the virtual university, such as online conferences and workshops, in order to obtain the participation of the students.
•	To implement a virtual library with references to the bibliographic contents of virtual courses. Thus, in addition to the basic modules and annexes of the courses, access to bibliographical electronic resources, allowing research activities, would be provided.

CONCLUSION

The proposed methodology, applied to a real case with huge historical data, obtained promising results in detecting student learning styles in an e-learning environment. The dimensions of Felder's learning framework were modelled using an adaptive approach. The versatility of the approach consists in integrated descriptive modelling using several data mining techniques for processing quantitative and qualitative data. Nonparametric independent component analysis, a technique normally used in signal processing,
has been useful for detecting patterns in e-learning data. Despite the possible problems of converting continuous numeric data to discrete value data, an improvement of interpretation capabilities has been demonstrated. Modelling learning dimensions as combinations of Web event activities enhanced the detection of the student learning styles. The knowledge discovery from e-learning Web data found useful knowledge (of global or particular content) on the academic performance of the students at the Universidad Politécnica Abierta (UPA). Among the findings are the following: (i) events of synchronous interactivity, such as chats and forum participation, and events of asynchronous interactivity empower student academic performance; (ii) in the courses with grades, academic student performance could be improved by motivating students towards course achievement; some students show good values for the different event activities, including exercise practice, but do not have evaluations. The general results of the research were well evaluated by academic experts, taking into account the validity, novelty, and simplicity of the knowledge. All this knowledge of global and particular content could be used to improve the e-learning system in different aspects: strategies to encourage interactivity between students, strategies to design an assessment methodology that reinforces the student learning styles detected, and global improvements of different components of the e-learning system towards a more distributed interactive learning could be proposed. Considering the findings of knowledge, a preliminary set of strategies was outlined.
FUTURE WORK

As a prototype, the study has yielded encouraging results on the application of knowledge discovery to e-learning analysis. Nevertheless, in order to obtain a complete application of this analysis, it is necessary to complement the data warehouse with more variables. Thus, the complexity of the analysis of the research topics can be modelled more realistically. Among these variables could be gender, age, location, enrolment date, likes and dislikes, and so on, besides teachers' data such as course survey results and research topics. Those variables would be collected through questionnaires, or transferred automatically from databases. The teaching styles part of Felder's learning framework, or of another educational model, has to be incorporated into the proposed methodology. The tuning of the learning and teaching styles to obtain a good performance in the outcome of the process would be modelled. Depending on the mixture of learning and teaching styles, several adaptations of the pedagogical e-learning resources could be made. The results of the enhanced model could be used to adapt teaching methodologies, including the critical aspect of the assessment style, or in general to improve the e-learning system, balancing distributed passive learning (DPL) and distributed interactive learning (DIL). The semantic information and the implicit informatics ontologies defined by the cluster descriptions, decision trees, and learning style conceptions found in the research could be used to implement a knowledge-based system and/or a standard ontology of the studied domain. The ontology could be used to exchange and reuse the knowledge, and it would make it easy to increase, foster, and update the knowledge obtained from the Web e-learning activities. The use of a Web ontology language and standard data interchange formats would make the approach to the semantic Web possible.

FUTURE RESEARCH DIRECTIONS

The chapter has discussed knowledge discovery in e-learning, considering several subjects: e-learning Web activities, data preprocessing, data mining techniques, knowledge evaluation, learning and teaching styles, pedagogical innovation, and informatics ontologies. The balance between interactive and personal activities is a critical factor for e-learning systems (distributed passive learning, DPL, versus distributed interactive learning, DIL). An interesting area of research is the proposal of new e-learning activities, or determining the suitable mixture of those activities considering, for instance, contents, multimedia resources, and ubiquitous networks. The quality of the discovered knowledge is directly proportional to the cleanness and relevance of the data. E-learning processes can generate a lot of useless information, so efficient algorithms for filtering and summarizing data, resolving inconsistencies, estimating missing data, and solving data heterogeneity are valuable for the knowledge discovery approach. Pattern recognition is a wide area that includes many kinds of machine learning algorithms. Independent component analysis (ICA) algorithms, as applied in the present chapter, have yielded important results in areas such as image filtering and segmentation, brain-computer interfaces, and electrocardiographic diagnosis. Recently, mixtures of ICAs have emerged as a flexible generative model for arbitrary data densities, using mixtures of Gaussian or Laplacian distributions, or nonparametric distributions for the components. Those ICA mixtures could be used to model data or knowledge on the Web. Usually the evaluation of the knowledge is made by experts. Nowadays, aspects such as novelty or interestingness are estimated by novelty detection algorithms. Those algorithms could be used by intelligent agents on the Web in order to make decisions considering user behaviours; in the field of e-learning this is a novel approach. Recently, second-level patterns in data mining have been studied. Those approaches have been used in protein and DNA research, where the results of a first level of data mining conform a huge knowledge domain.
In Web applications that kind of method would be useful. In addition, the automatic conversion of knowledge or semantic information obtained by Web mining techniques, represented by structures such as decision trees, into ontologies would make possible the exchange and reuse of the domain knowledge. Methodologies that create hierarchical structures of patterns are suitable for creating ontologies; to make that possible, conversion procedures that translate statistical information into standard data interchange formats are needed. All of these approaches may contribute to the development of the semantic Web.
Acknowledgment

Special thanks go to the Universidad Politécnica Abierta personnel for providing the Web data and information about the virtual campus. This work has been supported by the Spanish Administration under grant TEC 2005-01820.
References

Barker, K., Trafalis, T., & Rhoads, T.R. (2004). Learning from student model. In Systems and Information Engineering Design Symposium (pp. 79-86). Beck, J.E., Jia, P., Sison, J., & Mostow, J. (2003). Predicting student help-request behavior in an intelligent tutor for reading. In 9th International Conference on User Modeling (pp. 303-312). Beck, J.E., & Woolf, B.P. (1998). Using a learning agent with a student model. Lecture Notes in Computer Science, 1452, 6-15. Bezdek, J.C., & Pal, S.K. (1992). Fuzzy models for pattern recognition: Methods that search for structures in data. New York: IEEE Press. Borst, W.N. (1997). Construction of engineering ontologies for knowledge sharing and reuse. University of Twente, Centre for Telematics and Information Technology.
Cabena, P., Hadjnian, P., Stadler, R., Verhees, J., & Zanasi, A. (1997). Discovering data mining: From concept to implementation (IBM Books). Pearson Education. Caragea, D., Pathak, J., & Honavar, V. (2004). Learning classifiers from semantically heterogeneous data. Lecture Notes in Computer Science, 3291, 963-980. Cichocki, A., & Amari, S. (2001). Adaptive blind signal and image processing: Learning algorithms and applications. New York: John Wiley & Sons. Duda, R., Hart, P.E., & Stork, D.G. (2000). Pattern classification (2nd ed.). Wiley-Interscience. Elton, L.R.B., & Laurillard, D.M. (1979). Trends in research on student learning. Studies in Higher Education, 4(1), 87-102. Fayyad, U., Piatetsky-Shapiro, G., Smyth, P., & Uthurusamy, R. (1996). Advances in knowledge discovery and data mining. Cambridge, MA: The MIT Press. Felder, R., & Silverman, L. (1988). Learning and teaching styles. Journal of Engineering Education, 78(7), 674-681. Fridman, N., & McGuinness, D. (2001). Ontology development: A guide to creating your first ontology (Rep. No. KSL-01-05, SMI-2001). Garcia, P., Amandi, A., Schiaffino, S., & Campo, M. (2007). Evaluating Bayesian networks' precision for detecting students' learning styles. Computers & Education, 49(3), 794-808. Gasar, S., Bohanec, M., & Rajkovic, V. (2002). Combined data mining and decision support approach to the prediction of academic achievement. In Workshop on Integrating Aspects of Data Mining (pp. 41-52). Gomez-Perez, A., & Manzano-Macho, D. (2004). An overview of methods and tools for ontology
learning from texts. Knowledge Engineering Review, 19(3), 187-212. Gruber, T.R. (1995). Towards principles for the design of ontologies used for knowledge sharing. International Journal of Human-Computer Studies, 43, 907-928. Halkidi, M., Batistakis, Y., & Vazirgiannis, M. (2001). On clustering validation techniques. Journal of Intelligent Information Systems, 17(2-3), 107-145. Härdle, W., & Simar, L. (2006). Applied multivariate statistical analysis. New York: Springer. Hyvärinen, A., Karhunen, J., & Oja, E. (2001). Independent component analysis. New York: John Wiley & Sons. Hyvärinen, A., & Oja, E. (1997). A fast fixed-point algorithm for independent component analysis. Neural Computation, 9(7), 1483-1492. Kimber, K., Pillay, H., & Richards, C. (2007). Technoliteracy and learning: An analysis of the quality of knowledge in electronic representations of understanding. Computers & Education, 48(1), 59-79. Kotsiantis, S.B., Pierrakeas, C.J., & Pintelas, P.E. (2003). Preventing student dropout in distance learning using machine learning techniques. In Proceedings of the 7th International Conference on Knowledge-Based Intelligent Information and Engineering Systems. Kristofic, A., & Bielikova, M. (2005). Improving adaptation in Web-based educational hypermedia by means of knowledge discovery. In ACM Conference on Hypertext and Hypermedia (pp. 184-192). Larsen, J., Hansen, L.K., Szymkowiak, A., Christiansen, T., & Kolenda, T. (2002). Web mining: Learning from the World Wide Web (special issue). Computational Statistics and Data Analysis, 38, 517-532.
Lee, T., Girolami, M., & Sejnowski, T. (1999). Independent component analysis using an extended infomax algorithm for mixed sub-Gaussian and super-Gaussian sources. Neural Computation, 11, 417-441. Levy, Y. (2007). Comparing dropouts and persistence in e-learning courses. Computers & Education, 48(2), 185-204. Liaw, S., Chen, G., & Huang, H. (in press). Users' attitudes toward Web-based collaborative learning systems for knowledge management. Computers & Education. Liu, H., & Yang, M. (2005). QoL guaranteed adaptation and personalization in e-learning systems. IEEE Transactions on Education, 48(4), 676-687. Ma, Y., Liu, B., Wong, C.K., Yu, P.S., & Lee, S.M. (2000). Targeting the right students using data mining. In KDD'00: Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 457-464). Maimon, O., & Rokach, L. (2005). Data mining and knowledge discovery handbook (1st ed.). Springer. Merceron, A., & Yacef, K. (2003). A Web-based tutoring tool with mining facilities to improve learning and teaching. In 11th International Conference on Artificial Intelligence in Education (pp. 41-52). Michalski, R.S., & Stepp, R.E. (1983). Automated construction of classifications: Conceptual clustering versus numerical taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 5(4), 396-410. Minaei, B., Kashy, D.A., Kortemeyer, G., & Punch, W. (2003). Predicting student performance: An application of data mining methods with an educational Web-based system. In Proceedings of the 33rd Frontiers in Education Conference.
Mor, E., & Minguillón, J. (2004). E-learning personalization based on itineraries and long-term navigational behavior. In Thirteenth World Wide Web Conference (pp. 264-265). Pils, C., Roussaki, I., & Strimpakou, M. (2006). Location-based context retrieval and filtering. Lecture Notes in Computer Science, 3987, 256-273. Piramuthu, S. (2005). Knowledge-based Web-enabled agents and intelligent tutoring systems. IEEE Transactions on Education, 48(4), 750-756. Pituch, K.A., & Lee, Y.-K. (2006). The influence of system characteristics on e-learning use. Computers & Education, 47(2), 222-244. Puntambekar, S. (2006). Analyzing collaborative interactions: Divergence, shared understanding and construction of knowledge. Computers & Education, 47(3), 332-351. Quinlan, J.R. (1993). C4.5: Programs for machine learning. San Mateo, CA: Morgan Kaufmann. Reilly, R. (2005). Guest editorial. Web-based instruction: Doing things better and doing better things. IEEE Transactions on Education, 48(4), 565-566. Romero, C., Ventura, S., De Bra, P., & Castro, C. (2003). Discovering prediction rules in AHA! courses. In 9th International Conference on User Modeling (pp. 25-34). Salazar, A., Gosalbez, J., Bosch, I., Miralles, R., & Vergara, L. (2004). A case study of knowledge discovery on academic achievement, student desertion and student retention. In IEEE 2nd International Conference on Information Technology: Research and Education (pp. 150-154). Schellens, T., & Valcke, M. (2006). Fostering knowledge construction in university students through asynchronous discussion groups. Computers & Education, 46(4), 349-370.
Scott, D.W., & Sain, S.R. (2004). Multi-dimensional density estimation. In C.R. Rao, E.J. Wegman & J.L. Solka (Eds.), Handbook of statistics: Data mining and computational statistics (Vol. 24, pp. 229-261). Elsevier. Selim, H. (2007). Critical success factors for e-learning acceptance: Confirmatory factor models. Computers & Education, 49(2), 396-413. Shee, D., & Wang, Y. (in press). Multi-criteria evaluation of the Web-based e-learning system: A methodology based on learner satisfaction and its applications. Computers & Education. Shin, N., & Kim, J. (1999). An exploration of learner progress and dropout in Korea National Open University. Distance Education: An International Journal, 20, 81-97. Silvescu, A., Reinoso-Castillo, J., & Honavar, V. (2001). Ontology-driven information extraction and knowledge acquisition from heterogeneous, distributed, autonomous biological data sources. In International Joint Conference on Artificial Intelligence (IJCAI) (pp. 1-10). Sowa, J.F. (2000). Knowledge representation: Logical, philosophical and computational foundations. Pacific Grove, CA: Brooks Cole. Srivastava, J., Cooley, R., Deshpande, M., & Tan, P. (2000). Web usage mining: Discovery and applications of usage patterns from Web data. SIGKDD Explorations, 1(2), 12-23. Stephenson, J.E., Brown, C., & Griffin, D.K. (in press). Electronic delivery of lectures in the university environment: An empirical comparison of three delivery styles. Computers & Education. Sun, P., Tsai, R., Finger, G., Chen, Y., & Yeh, D. (in press). What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers & Education.
Uschold, M., King, M., Morales, S., & Zorgios, Y. (1998). The enterprise ontology. Knowledge Engineering Review, 13, 32-89. Weigand, H. (1997). Multilingual ontology-based lexicon for news filtering. In IJCAI Workshop on Multilingual Ontologies (pp. 138-159). Xenos, M. (2004). Prediction and assessment of student behaviour in open and distance education in computers using Bayesian networks. Computers & Education, 43(4), 345-359. Zang, W., & Lin, F. (2003). Investigation of Web-based teaching and learning by boosting algorithms. In IEEE International Conference on Information Technology: Research and Education (pp. 445-449). Ziehe, A., & Müller, K.R. (1998). TDSEP-an efficient algorithm for blind separation using time structure. In 8th International Conference on Artificial Neural Networks (pp. 675-680).
Additional Reading

Data Mining and Knowledge Discovery

Chi, X., & Spedding, T.A. (2006). A Web-based intelligent virtual learning environment for industrial continuous improvement. In IEEE 4th International Conference on Industrial Informatics (pp. 1102-1107). Hammouda, K., & Kamel, M. (2006). Data mining in e-learning. In S. Pierre (Ed.), E-learning networked environments and architectures: A knowledge processing perspective. Springer (Advanced Information and Knowledge Processing series). Han, J., & Kamber, M. (2001). Data mining: Concepts and techniques. Academic Press.
Markou, M., & Singh, S. (2003). Novelty detection: A review. Part 1: Statistical approaches. Signal Processing, 83(12), 2481-2497. Yang, Y., Wu, X., & Zhu, X. (2006). Mining in anticipation for concept change: Proactive-reactive prediction in data streams. Data Mining and Knowledge Discovery, 13(3), 261-289.
Ontologies

Berners-Lee, T., Hendler, J., & Lassila, O. (2001). The semantic Web: A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities. Scientific American, 284(5), 34-43. Cardoso, J., & Sheth, A. (2002). Semantic e-workflow composition. Journal of Intelligent Information Systems, 21(3), 191-225. Ceccaroni, L., & Ribiere, M. (2002). Experiences in modeling Agentcities utility-ontologies with a collaborative approach. Paper presented at the Ontologies in Agent Systems Workshop, Autonomous Agents and Multi-Agent Systems Conference. Davies, J., Studer, R., & Warren, P. (2006). Semantic Web technologies: Trends and research in ontology-based systems. Wiley. Sowa, J.F. (2006). Categorization in cognitive computer science. In H. Cohen & C. Lefebvre (Eds.), Handbook of categorization in cognitive science (pp. 141-163). Elsevier. Sowa, J.F. (2006). A dynamic theory of ontology. In B. Bennett & C. Fellbaum (Eds.), Formal ontology in information systems (pp. 204-213). IOS Press.
Pattern Recognition

Langseth, H., & Nielsen, T.D. (2005). Latent classification models. Machine Learning, 59(3), 237-265. Lee, T.W., Lewicki, M.S., & Sejnowski, T.J. (2000). ICA mixture models for unsupervised classification of non-Gaussian classes and automatic context switching in blind signal separation. IEEE Transactions on Pattern Analysis & Machine Intelligence, 22(10), 1078-1089. Salazar, A., Igual, J., Vergara, L., & Serrano, A. (in press). Learning hierarchies from ICA mixtures. Paper presented at the International Joint Conference on Neural Networks. Vergara, L., Salazar, A., Igual, J., & Serrano, A. (2006). Data clustering methods based on mixture of independent component analyzers. Paper presented at the ICA Research Network International Workshop, ICArn (pp. 127-130). Webb, A.R. (2002). Statistical pattern recognition. John Wiley and Sons.
Education

Butler, K.A. (1990). Learning and teaching style: In theory and practice (2nd ed.). Gregorc Associates, Incorporated. Entwistle, N. (1990). Styles of learning and teaching: An integrated outline of educational psychology for students, teachers, and lecturers. David Fulton Publishers. Entwistle, N., & Peterson, E. (2004). Conceptions of learning and knowledge in higher education: Relationships with study behaviour and influences of learning environments. International Journal of Educational Research, 41(6), 407-428.
E-Learning

Chen, C.M., Lee, H.M., & Chen, Y.H. (2005). Personalized e-learning system using item response theory. Computers & Education, 44(3), 237-255. Conole, G., Dyke, M., Oliver, M., & Seale, J. (2004). Mapping pedagogy and tools for effective learning design. Computers & Education, 43(1-2), 17-33. Driscoll, M. (2002). Web-based training: Designing e-learning experiences (2nd ed.). Pfeiffer. Figini, S., Baldini, P., & Giudici, P. (2006). Nonparametric approaches for e-learning data. Lecture Notes in Computer Science, 4065, 548-560. Shih, T.K., Wang, T.H., Chang, C.Y., Kao, T.C., & Hamilton, D. (2007). Ubiquitous e-learning with multimodal multimedia devices. IEEE Transactions on Multimedia, 9(3), 487-499. Watkins, R. (2005). 75 e-learning activities: Making online learning interactive. Pfeiffer.
Annex 1

This annex contains the formulation of the ICA algorithm applied in the chapter. The probability density function of the data $\mathbf{x}$ can be expressed as:

$$p(\mathbf{x}) = |\det \mathbf{W}|\, p(\mathbf{y}) \qquad (3)$$

where $p(\mathbf{y})$ can be expressed as the product of the marginal distributions, since it is the estimate of the independent components:

$$p(\mathbf{y}) = \prod_{i=1}^{M} p_i(y_i) \qquad (4)$$

If we assume a nonparametric model for $p(\mathbf{y})$, we can estimate the source pdf's from a set of training samples obtained from the original dataset using (2). We propose a kernel density estimation technique (Scott & Sain, 2004; Duda, Hart, & Stork, 2000), where the marginal distribution of a reconstructed component is approximated as:

$$p(y_m) = a \sum_{n'} e^{-\frac{1}{2}\left|\frac{y_m - y_m^{(n')}}{h}\right|^2}, \quad m = 1 \ldots M \qquad (5)$$

where $a$ is a normalization constant and $h$ is a constant that adjusts the degree of smoothing of the estimated pdf. The learning algorithm can be derived using maximum-likelihood estimation. In a probabilistic context it is usual to maximize the log-likelihood of (3), $L(\mathbf{W}) = \log|\det \mathbf{W}| + \log p(\mathbf{y})$, with respect to the unknown matrix $\mathbf{W}$:

$$\frac{\partial L(\mathbf{W})}{\partial \mathbf{W}} = \frac{\partial \log|\det \mathbf{W}|}{\partial \mathbf{W}} + \frac{\partial \log p(\mathbf{y})}{\partial \mathbf{W}} \qquad (6)$$

where

$$\frac{\partial \log|\det \mathbf{W}|}{\partial \mathbf{W}} = (\mathbf{W}^T)^{-1} \qquad (7)$$

Imposing independence on $\mathbf{y}$ and using (4):

$$\frac{\partial \log p(\mathbf{y})}{\partial \mathbf{W}} = \sum_{m=1}^{M} \frac{\partial \log p(y_m)}{\partial \mathbf{W}} = \sum_{m=1}^{M} \frac{1}{p(y_m)} \frac{\partial p(y_m)}{\partial \mathbf{W}} = \sum_{m=1}^{M} \frac{1}{p(y_m)} \frac{\partial p(y_m)}{\partial y_m} \frac{\partial y_m}{\partial \mathbf{W}} \qquad (8)$$

where, using (5):

$$\frac{\partial p(y_m)}{\partial y_m} = -a \sum_{n'} e^{-\frac{1}{2}\left|\frac{y_m - y_m^{(n')}}{h}\right|^2} \frac{y_m - y_m^{(n')}}{h^2}, \quad m = 1 \ldots M \qquad (9)$$

Let us call $\mathbf{w}_m^T$ the $m$-th row of $\mathbf{W}$. Then $y_m = \mathbf{w}_m^T \mathbf{x}$, so

$$\frac{\partial y_m}{\partial \mathbf{W}} = \mathbf{M}_m \qquad (10)$$

where

$$\mathbf{M}_m(l, l') = \delta(l - m)\, x_{l'} \qquad (11)$$

Substituting (5), (9), and (10) in (8) we have:

$$\frac{\partial \log p(\mathbf{y})}{\partial \mathbf{W}} = \sum_{m=1}^{M} f(y_m)\, \mathbf{M}_m$$

where

$$f(y_m) = -\frac{1}{h^2} \frac{\displaystyle\sum_{n'} \left(y_m - y_m^{(n')}\right) e^{-\frac{1}{2}\left|\frac{y_m - y_m^{(n')}}{h}\right|^2}}{\displaystyle\sum_{n'} e^{-\frac{1}{2}\left|\frac{y_m - y_m^{(n')}}{h}\right|^2}} \qquad (12)$$

Considering the vector $\mathbf{f}(\mathbf{y}) = [f(y_1)\; f(y_2) \ldots f(y_M)]^T$ we can write

$$\sum_{m=1}^{M} f(y_m)\, \mathbf{M}_m = \mathbf{f}(\mathbf{y})\, \mathbf{x}^T \qquad (13)$$

Using the results of (7) and (13) we may finally write (6) as:

$$\frac{\partial L(\mathbf{W})}{\partial \mathbf{W}} = (\mathbf{W}^T)^{-1} + \mathbf{f}(\mathbf{y})\, \mathbf{x}^T \qquad (14)$$

Then we can apply (14) in the gradient updating algorithm to iteratively find the optimum matrix $\mathbf{W}$:

$$\mathbf{W}(i+1) = \mathbf{W}(i) + \alpha \left.\frac{\partial L(\mathbf{W})}{\partial \mathbf{W}}\right|_{\mathbf{W}(i)} \qquad (15)$$
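The gradient ascent of (14)-(15) with the kernel score (12) can be sketched in a few lines of Python. This is an illustration of the formulation, not the authors' implementation: it assumes that all observed columns double as the kernel training samples of (5), averages the update over the sample set, and uses an assumed learning rate `alpha` (the step size is not specified in the derivation).

```python
import numpy as np

def nonparametric_ica(X, n_iter=200, alpha=0.02, h=0.5, seed=0):
    """Sketch of the gradient update (14)-(15) with the kernel score (12).

    X is an (M, T) array of observed mixtures; all T columns are also used
    as the kernel training samples of equation (5)."""
    rng = np.random.default_rng(seed)
    M, T = X.shape
    W = np.eye(M) + 0.01 * rng.standard_normal((M, M))
    for _ in range(n_iter):
        Y = W @ X  # reconstructed components y = Wx, shape (M, T)
        # diffs[m, t, n] = y_m(t) - y_m^{(n)} for every pair of samples
        diffs = Y[:, :, None] - Y[:, None, :]
        K = np.exp(-0.5 * (diffs / h) ** 2)          # Gaussian kernels of (5)
        # f(y_m) per (12): kernel-weighted score of the estimated marginal
        F = -(diffs * K).sum(axis=2) / (h**2 * K.sum(axis=2))
        # (14), averaged over the sample set: (W^T)^{-1} + E[f(y) x^T]
        grad = np.linalg.inv(W.T) + (F @ X.T) / T
        W = W + alpha * grad                         # gradient ascent, (15)
    return W
```

With enough samples and a suitable smoothing constant `h`, `W @ X` approximates statistically independent components (up to the usual permutation and scaling ambiguities of ICA).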
Chapter XI
Swarm-Based Techniques in E-Learning: Methodologies and Experiences

Sergio Gutiérrez, University Carlos III of Madrid, Spain
Abelardo Pardo, University Carlos III of Madrid, Spain
Abstract

This chapter provides an overview of the use of swarm-intelligence techniques in the field of e-learning. Swarm intelligence is an artificial intelligence technique inspired by the behavior of social insects. Taking into account that the Internet connects a large number of users with negligible delay, some of these techniques can be combined with concepts from sociology and applied to e-learning. The chapter analyzes several such applications and exposes their strong and weak points. The authors hope that understanding the concepts used in the applications described in the chapter will not only inform researchers about an emerging trend, but also provide them with interesting ideas that can be applied to and combined with any e-learning system.
Introduction

The World Wide Web offers more than access to information. It connects many people all around the world with very short delays. New applications, inspired by natural processes (like those that allow social insects to work together), are appearing.
Many research efforts are trying to take advantage of two fundamental characteristics of the Internet: small communication delays (independent of physical location) and a large number of users. Social systems try to emulate the behavior of social groups in real life. These systems extract some information from the behavior of
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
the group and use it to obtain some benefit for the students. In other words, they take advantage of the interactions between the different members of the learning community to help each of its members. This chapter aims at providing the reader with a broad overview of the state of the art in this emerging field, specifically regarding the problems of sequencing and filtering.
Background

A swarm may be defined as a population of interacting elements that is able to optimize some global objective through collaborative search of a space (Kennedy & Eberhart, 2001). The elements may be very simple machines or very complex living beings, but two restrictions must be observed: they are limited to local interactions, and interaction is usually not performed directly but indirectly through the environment. The property that makes swarms interesting is their self-organizing behavior; in other words, the fact that many simple processes can lead to complex results. The behavior of ants is the best-known example of swarm intelligence. In many ant species, ants deposit a chemical substance called pheromone as they move from a food source to the nest. Ants do not communicate directly with each other, but they follow pheromone trails (leaving their own pheromones behind, so the trail is reinforced). Shorter trails are more strongly reinforced, as the ants cross them more times in the same period of time, so they are followed by more ants. In the end, this positive feedback loop results in the path connecting the food source and the nest being optimized without any global knowledge of the problem by any of the agents. This process of indirect communication in a swarm is called stigmergy (Bonabeau, Dorigo, & Theraulaz, 1999). Another example of a stigmergic process is nest building by termites, in which the insects are able to construct complex buildings with arcs,
hallways, and ventilation systems following very simple and local rules. Swarm intelligence is a growing field of active research, and its applications outside the Internet are manifold. Swarm intelligence techniques have been applied to many different kinds of problems. Examples include packet routing (Dorigo & Stützle, 2004), graph coloring (Costa, Hertz, & Dubious, 1995), allocating tasks for robots in a factory (Morley, 1996), routing a fleet of trucks (Gambardella, Taillard, & Agazzi, 1999), as well as many robotic applications (Bonabeau et al., 1999). It is common to use the term ant colony optimization (ACO) for the set of heuristics used on these problems. The Internet allows low-delay communication among large numbers of people. Communication can be direct (as in IP-telephony or chats) or indirect (as in Internet polls or file interchange), and there are many intermediate cases (such as Internet fora). A large number of participants, indirect communication, local awareness, and local actions define a setting in which the emergent behavior of a swarm can appear from the combination of the activities of all its members.
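The pheromone loop just described can be illustrated with a toy ant-colony shortest-path search. The graph, the deposit rule (pheromone inversely proportional to path length), and the evaporation rate below are illustrative assumptions, not taken from any of the cited systems:

```python
import random

def aco_shortest_path(graph, source, target, n_ants=50, n_rounds=30,
                      evaporation=0.5, seed=0):
    """Ants walk from source to target; each deposits pheromone inversely
    proportional to its path length, so shorter routes are reinforced.
    `graph` maps node -> {neighbor: distance}."""
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}  # pheromone per edge
    best = None
    for _ in range(n_rounds):
        for _ in range(n_ants):
            node, path, length, visited = source, [source], 0.0, {source}
            while node != target:
                options = [v for v in graph[node] if v not in visited]
                if not options:              # dead end: abandon this ant
                    path = None
                    break
                # probability of an edge grows with its pheromone level
                weights = [tau[(node, v)] / graph[node][v] for v in options]
                nxt = rng.choices(options, weights=weights)[0]
                length += graph[node][nxt]
                path.append(nxt)
                visited.add(nxt)
                node = nxt
            if path is None:
                continue
            if best is None or length < best[1]:
                best = (path, length)
            for u, v in zip(path, path[1:]):  # stigmergic deposit
                tau[(u, v)] += 1.0 / length
        for edge in tau:                      # evaporation keeps trails adaptive
            tau[edge] *= evaporation
    return best
```

Over the rounds, the positive feedback concentrates the colony on the short route, exactly the mechanism that the e-learning systems below transplant to graphs of learning activities.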
Collaborative Sequencing

The problem of adaptive sequencing is, given a set of learning activities, to find the best sequence for a particular student. Given a proper user model, a system can adapt the sequence of learning activities to each student. Unfortunately, most approaches share a common weakness: the role of the human designer is very important, and a mistake on her part affects the whole system. The negative effect on students may vary. Some human mistakes may be avoidable (e.g., errors when typing names of activities), but the sequencing design process is subject to some unavoidable problems. First, teachers have a limited capacity and limited knowledge about activities that may be of interest for their students.
If the sequencing strategy depends entirely on the teacher, those activities that the teacher does not know will never be accessed by the students. Moreover, students evolve over time (e.g., changes in formal curricula, different access to media and knowledge sources, etc.). A sequencing design that works very well today may not be as good tomorrow. This cannot be addressed by the teacher alone (Gutiérrez, Pardo, & Delgado Kloos, 2006). Three different approaches are presented in this section; all of them try to extract some information from the behavior of the group of students and use it to improve the sequencing strategy.
The Paraschool System

Paraschool is the leading e-learning company in France, offering its services to several thousand students in the country. It offers support on many subjects, online exercises, correction, and feedback. Its system uses techniques based on ACO in order to detect bad pedagogical plans and to correct them automatically (Valigiani, Lutton, Jamont, Biojout, & Collet, 2006). The Paraschool system divides the activities into courses and chapters. Courses can range from a short training (e.g., a course on security when using heavy machinery) to a full academic year at a school (e.g., a fourth grade class). Courses are divided into chapters. Inside each chapter, a graph of activities is defined. Every node in this graph is an activity, typically composed of a theory Web page, an exercise that illustrates the concepts presented, and a final page that corrects the answer of the student and offers her some feedback. Edges of the graph represent possible transitions after a node has been covered. Nodes are not necessarily connected to themselves. Probabilities are associated with every arc; these probabilities determine which will be the next learning unit to be delivered to the student. They are initially set by a pedagogical team of teachers, as pedagogical weights. Taking into account the edges that connect the
nodes and their probabilities, the sequencing of the students is specified stochastically (Semet, Lutton, & Collet, 2003). As such, the graph defines possible sequencings of the activities stochastically, but its definition is static (the pedagogical weights do not change), so the system can suffer from the problems stated before. Thus, the Paraschool system takes an approach inspired by the ACO paradigm. Every student traversing the graph acts like an ant. As she interacts with the exercises, she leaves pheromones on the arcs that influence their probabilities of being chosen. These pheromones can be of two types: positive or negative. Positive pheromones are left when the student correctly solves an exercise; negative pheromones are left when the student fails. Combining both pheromones and the pedagogical weights, a fitness function is calculated for every arc. After an exercise has been completed by a student, all outgoing arcs are sorted by fitness and one is selected by stochastic tournament (Semet, Lutton, & Collet, 2003). That arc is "followed" by the student and the next activity is shown. In order to enforce the notion of a learning path, pheromones are laid on the last four arcs traversed by the student; older arcs receive a smaller amount of pheromone. As pheromones are deposited on the arcs, those arcs that are part of successful sequences of exercises are reinforced, while those that lead to failure are depleted. Over time, the system is able to refine the original design and even correct bad pedagogical designs (Gutierrez, Valigiani, Jamont, Collet, & Delgado Kloos, 2007). Pheromone values are decreased over time (much in the same way natural pheromones evaporate in air) in order to give the system responsiveness to changes. The first versions of the Paraschool system used natural time, but this approach presented problems of seasonality and was difficult to calibrate (Valigiani, Jamont, Biojout, Lutton, & Collet, 2005).
Thus, a sort of user-based time was used. Pheromone values on
arcs are only updated when a user goes through that arc or through a neighbour (an arc that leads to the same exercise). When a student finishes exercise A, positive and negative pheromones are laid on the last four arcs that she followed, so the pheromones on those arcs are increased. At the same time, pheromone values on all the other arcs that lead to the same exercise are decreased. This can be seen as competition between arcs: those arcs that are used more often will have high pheromone values, while those that are seldom used will see their pheromone levels "evaporate" rapidly. The work by Valigiani et al. (2005) has shown several differences between the application of ant-inspired heuristics to technical problems (like those referenced in the background section), in which "ants" are little software agents, and their application to social problems, in which real people are used as "ants." First, real students are not as altruistic as ants (either natural or artificial) are: students will not willfully sacrifice themselves for the good of the community. In other words, a student does not expect to be lost in the exercise graph for the sake of exploration. All students want to learn. They are not interested in optimizing the results of the group; they are interested in optimizing their own results. This is only common sense, but it means that there are aspects of the ant-colony optimization heuristics that cannot be used or do not perform as expected. Secondly, the process of learning is one that must be optimized for each learner. The fact that a pedagogical path is optimized for a group does not mean that it is optimized for each of its members. In order to give an additional level of adaptation at a personal level, personal pheromones are used. These pheromones are left by each user as the user traverses the graph, and prevent the user from repeating the same exercises over and over again.
Personal pheromones are laid by use and evaporate with natural time. They are a multiplicative factor in the fitness function.
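As a rough sketch of this selection mechanism: each outgoing arc carries a pedagogical weight and the group's positive and negative pheromones, the student's personal pheromone multiplies the result, and the next arc is picked by a two-way stochastic tournament. The additive fitness formula and the tournament probability below are illustrative assumptions; the chapter does not give Paraschool's exact expressions.

```python
import random

def arc_fitness(arc):
    # Assumed combination: pedagogical weight plus positive minus negative
    # group pheromones, scaled by the student's personal pheromone.
    group = arc["weight"] + arc["positive"] - arc["negative"]
    return max(group, 1e-9) * arc["personal"]

def select_arc(arcs, p=0.8, rng=None):
    """Stochastic tournament between two random outgoing arcs:
    the fitter arc wins with probability p."""
    rng = rng or random.Random()
    if len(arcs) == 1:
        return arcs[0]
    a, b = rng.sample(arcs, 2)
    better, worse = sorted((a, b), key=arc_fitness, reverse=True)
    return better if rng.random() < p else worse
```

The tournament keeps the selection stochastic: the fitter arc usually wins, but weaker arcs retain a chance of being followed, which preserves exploration of the graph.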
A final characteristic of the Paraschool system is its ability to create new arcs that connect its exercises, apart from those that were set by the pedagogical team. Paraschool users can navigate in a guided fashion (as explained above) or freely. Free navigation means that the students can access any of the exercises that are available on the site. Every time this happens, the system records the transition and creates a new arc from the last exercise performed by the student to the exercise that she has gone to. From that point on, that arc is considered like a normal arc of the graph (only with no starting pedagogical weight) that is selectable in normal guided navigation, receives pheromones, and so forth. This is an interesting feature that allows finding learning sequences that the pedagogical team had not thought about in the first place. Moreover, it opens the door to new exercises being added to the system even if no pedagogical team creates a graph for them. As connections between the exercises will appear from free navigation, the algorithm will assign a fitness to those arcs over time, as pheromones are deposited on them according to the success or failure of the students traversing them. In the end, the new set of exercises will organize itself.
Learning Networks

Learning networks (Koper, 2005) are flexible learning facilities aimed at supporting the needs of learners at various levels of competence throughout their lives. They support ubiquitous access to learning facilities at work or at home. Learning networks consist of learning events (called activity nodes) in a given domain. Activity nodes can be associated with anything that supports learning (e.g., a Web resource, course, workshop, etc.). Both providers and learners can create new activity nodes or adapt existing ones (even deleting nodes). Thus, a learning network represents a large and ever-changing set of activity nodes that provide learning activities for learners, from different providers and at different levels of expertise.
In a learning network, users have a goal: a description of the level of competence that they want to achieve in a particular domain. To achieve a goal, a learner can follow one route, sometimes more. Additionally, users in a learning network are described by their learning track and their position. The learning track is defined as the set of activity nodes that have already been completed. The learning position includes the learning track and all those activity nodes that can be considered as completed, either because they are related to some node in the learning track or because of previous study or work experience on the part of the student. This is a complex problem that is studied in van Bruggen et al. (2004), along with some approaches to it. As can be expected from any life-long learning scenario, learning networks can give access to a large number of activities and resources. This introduces considerable complexity that might hinder the learning process of students: they may find it hard to gain an overview of the number of modules and the best sequence in which to study them. In order to help learners during the cognitive, decision-making process required for choosing and sequencing their learning events, Tattersall et al. (2005) take inspiration from how ants interact indirectly using pheromones to optimize the path from their nest to a food source. The system allows learners to select from a list of the learning goals in a learning network, thereby also identifying the route to that goal. Learners' interactions are stored in a log, including information about the learner, the activity node, a timestamp, and an indication of performance. Learners' interactions with learning resources and activities are recorded automatically as they progress through a body of knowledge, identifying sequences. These sequences can be processed and aggregated to derive a sort of "pheromone strength," so that those paths where more learners have been successful are favoured.
Using information on the tracks of all learners, a transition matrix can be calculated (Deshpande & Karypis, 2004). This matrix indicates, for each activity node, how many learners have successfully progressed from that node to the following one. This information can be fed back to other learners, providing some guidance for their navigation in an indirect fashion. The algorithm for selecting the next activity node for a user who has just completed an activity is as follows. First, the set of activity nodes that remain to be completed is calculated. After that, the success information for each of them (starting from the current node) is retrieved from the transition matrix. Then, using that information as weights, the next activity is chosen randomly. An example will clarify this point. Let us imagine a student who has just finished activity node A. To achieve her goal, the student has to complete activity nodes B, C, D, and E. From the transition matrix, we can see how many students moved from A to each of these activity nodes and completed them successfully. (The definition of success may vary between nodes: it may be a Boolean quality, for example, answering one question correctly, or a measure of several parameters, for example, answering 60% of the questions correctly, or doing so in less time than some threshold, and so forth.) The data in the matrix say that 5, 2, 1, and 2 students were successful in completing B, C, D, and E, respectively. Thus, the next activity node recommended to the student will be B with 50% probability, C with 20%, and so forth. The result is that the most frequently and successfully followed path has the highest probability of being selected. To prevent suboptimal convergence to this path (stagnation), there is a chance that the other paths are selected. This navigation support is designed to facilitate planning decisions and reduce the risk of information overload by offering accessible information.
This information is learner centered, as it is related to the learner's present position (i.e., her knowledge level in the current learning network). As the feedback makes use of success rates, it helps learners to make better choices, based on sequences tried and tested by their peers.
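The weighted random selection in the example above can be sketched in a few lines. The counts are those from the worked example; the function is an illustration, not the system's actual code:

```python
import random

# Peer success counts from the worked example: students who completed A
# and then went on to succeed at B, C, D, and E.
successes = {"B": 5, "C": 2, "D": 1, "E": 2}

def recommend_next(successes, rng=random):
    """Choose the next activity at random, weighted by peer success, so the
    best-trodden path is favoured without ever excluding the others."""
    nodes = list(successes)
    return rng.choices(nodes, weights=[successes[n] for n in nodes], k=1)[0]

# B comes out with probability 5/10 = 50%, C with 20%, D with 10%, E with 20%.
```

Because the choice is random rather than greedy, less-travelled paths are still sampled, which is what prevents the stagnation mentioned in the text.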
Swarm-Based Techniques in E-Learning
SIT

SIT (Gutiérrez, Pardo, & Delgado Kloos, 2006b) is a generic platform for the development of intelligent tutoring systems. The platform offers several services to the tutoring system designer, from authentication facilities to automatic retrieval and forwarding of Web resources. Intelligent tutoring systems built on top of SIT only have to take care of the user model, the domain model, and the learning strategies they want to develop (Hartley & Sleeman, 1973). Most administrative tasks are performed by SIT. It has been calculated that a large part of the cost of developing an intelligent tutoring system comes from dealing with administrative tasks (Murray, 1999). SIT lets developers focus on the teaching and learning aspects of development. SIT puts the focus on adapting the sequence of learning resources to the user. Thus, tutoring systems built on top of SIT must implement some model of the domain they are tutoring and some sequencing strategy that tailors the sequence of activities to the student's capabilities and needs. It is the responsibility of the tutoring system to know where to find the appropriate learning resources. In the context of SIT, resources are identified by a URL. Resources can be created by the same pedagogical team that creates the tutoring system, or resources already available on the Web can be used. SIT encourages reuse of existing learning resources, as it has no resources of its own but can retrieve and deliver any resource to the students. They can be static resources, like images, or dynamic resources, like PHP pages that perform a full remediation cycle of showing an exercise, getting the student's answers, and showing feedback adapted to her answers. It is the responsibility of the tutoring system to analyse the input from the user, if any, as well as all former knowledge about her. Using that information, the tutoring system decides which learning activity or resource is to be sequenced next for the student.
This information is given
to SIT, and the platform fetches the corresponding resource and forwards it to the student. SIT wraps the resources in a brief user interface that allows the student to log out or continue the tutoring process after she has interacted with the learning resource (e.g., read some text, solved an exercise, etc.). If the student logs out, the session is restarted at the same point the next time she logs into the system. If the student continues, all data is collected by SIT and forwarded to the tutoring system, and the cycle restarts. If only one activity is selected by the tutoring system as suitable for the student, that is the activity delivered to the student. But it may be the case that the tutoring system decides that several activities are suitable for the student, given current knowledge about her. When this happens, SIT shows a menu to the student with all the options, so the student can decide which activity she prefers next, based on the information given by the tutoring system and forwarded by SIT (typically, a title or short description of every option). It is this last characteristic of SIT that is important from the point of view of this chapter, as SIT uses techniques inspired by the mechanisms that ants use for optimizing the paths from their nests to food sources (as explained above) to help students in their decision. Using a method similar to that of the learning networks explained above, SIT collects information about the success of students when interacting with activities and resources following a particular sequence. It then feeds back some of this information to students (Gutiérrez, Pardo, & Delgado Kloos, 2006a). If the administrator of the system allows it (this can be selected for a specific tutor, or just for some learning modules, etc.), SIT collects information about the success of the students when interacting with the learning activities delivered to them.
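The division of labour just described might be sketched as follows. Class and method names are hypothetical; the text does not specify SIT's real interface:

```python
from abc import ABC, abstractmethod

class TutoringSystem(ABC):
    """What a tutor built on top of SIT must supply: a domain model and a
    sequencing strategy. Authentication, retrieval, and forwarding of the
    resources themselves are the platform's job."""

    @abstractmethod
    def next_activities(self, student: str, last_result: dict) -> list:
        """Return the URLs of the suitable next activities for this student."""

def deliver(tutor: TutoringSystem, student: str, last_result: dict):
    """One platform cycle: ask the tutor, then either forward the single
    suitable resource or present a menu of options to the student."""
    urls = tutor.next_activities(student, last_result)
    if len(urls) == 1:
        return ("forward", urls[0])   # deliver the resource directly
    return ("menu", urls)             # let the student choose

class DrillTutor(TutoringSystem):
    def next_activities(self, student, last_result):
        # Toy strategy: move on once the exercise is answered correctly,
        # otherwise offer a retry or a hint.
        if last_result.get("correct"):
            return ["http://example.org/next-topic"]
        return ["http://example.org/retry", "http://example.org/hint"]

print(deliver(DrillTutor(), "ann", {"correct": False}))
# → ('menu', ['http://example.org/retry', 'http://example.org/hint'])
```

The menu branch is the point where, as described next, SIT can enrich each option with information about how peers fared.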
Then, when different learning activities are chosen for the student and a menu is presented to her, SIT shows information about how her peers have performed on those activities
when they were in the same state as she is now. The state is defined as the last activity or sequence of activities performed by the student. Information is presented as a ratio for each available activity in the menu: n students tried this activity when they were in the same state as you are now; m were successful. If there is no information about success, or if that information makes no sense (e.g., if an activity consists of looking at a photograph for five minutes or reading a piece of text, there may be no concept of "success"), only the first half of the information is shown: n students tried this activity when they were in the same state as you are now. It may be the case that some activities are presented with a success ratio while others are not. Students then have the possibility of selecting an activity using the information they have about their peers. Students who are conscious of their relative competence will select those activities that seem to be harder, looking for bigger challenges or trying to avoid the longer and more boring sequences of learning material. Students who find the learning material more difficult will choose the longer but easier path to achieve the same learning goals. In effect, the information about the results of their peers places the students in a metacognitive state in which they have to think about their relative performance compared with other students as much as about their level of skill in a particular domain. This makes the system especially suited as a complement to classical teacher-classroom-student scenarios, in which the students interact frequently with each other and have a more accurate picture of their relative skill level compared to the rest of the group. The system is valid for distance learning as well, as the comparative position with respect to the group can be inferred as the learning process goes on.
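The peer-feedback message described above reduces to a simple formatting rule; a minimal sketch (counts illustrative, not SIT's actual code):

```python
def peer_info(tried: int, successful=None) -> str:
    """Build the ratio message shown next to a menu option. When an activity
    has no meaningful notion of success, only the attempt count is shown."""
    msg = (f"{tried} students tried this activity when they were "
           f"in the same state as you are now")
    if successful is not None:
        msg += f", {successful} were successful"
    return msg

print(peer_info(12, 8))   # full ratio: attempts and successes
print(peer_info(5))       # e.g., "read this text": no concept of success
```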
A student who finds that she is successful where many others have failed will probably look for harder challenges and vice versa. This approach discriminates between students by skill level in a sort of stigmergic development.
There is no need to mark the student with any tag at the beginning of the process, nor is there any need for complex student modelling to guide this classification (although this auto-organizative procedure is complementary to any student modelling performed by the intelligent tutoring system built on top of SIT). Students situate themselves at their appropriate level of skill by following the traces left by others. It is important to note that this process has two virtues: it is automatic and auto-organizative, and it is flexible. Students who perform very well on the first activities of a module may not do so later, and the system can adapt to them at every moment. From a certain point of view, there can be as many levels as students, although this is dependent on the sequencing strategy of the tutoring system. In other words, the "skill levels" of the students are fractal: after each and every step, some of the activities present greater challenges than others, and students have to choose at every moment which path they want to follow. The technique used by SIT obviously bears many similarities to that of the learning networks, but there are several differences. The first difference is that SIT does not have any resources of its own, but relies on forwarding external resources as directed by the tutoring system. This flexible approach is a double-edged sword, as it means that information about success on the activities is not always available. If the auto-organizative feature of SIT is to be used, resources and activities have to be adapted so that they can send that information to the platform (this is performed using a tag on the HTTP request). Without success or failure information, SIT is only able to show the number of students that have performed one or another activity, and most of the advantages of the stigmergic process are lost. Another difference is that the learner plays a central part in the adapting and stigmergic processes.
The “pheromone” information is shown directly to the student, and it is the student who makes the decision, not the system. This is much
more flexible than the former approach and allows for the formation of several skill levels. When only one activity is recommended, most students (both skillful and modest) will tend to follow the same path, leading to a Stalinist regime (Kauffman, 1996); even those who rebel against a path that may not be suited to them and choose another will be lost in the absence of any other guidance, and their traces will not be significant. Presenting different options, with different information in each case, allows the users to follow different paths according to their capabilities. In all cases, the traces left behind are significant to other students who come afterwards. A third difference is the definition of the state of the learner. The state of the student is the last activity or sequence of activities that has been completed by the student. The length of the sequence is a parameter to be set by the administrator of the system. Longer sequences mean more precise adaptation between the cognitive state of the peers (inferred from the information shown) and the cognitive state of the student. Shorter sequences mean that it is easier (i.e., takes less time) to get results that are meaningful to the student. This is a common problem in most social systems and is referred to in the literature as the cold-start problem. In the case of the learning networks, success is calculated only from the current activity node to the next one. SIT can calculate longer sequences of activities from the past history of the student in order to give more precise information. Related to the former two issues, there is the fact that SIT calculates the success information in relative terms. Considering the case where the state of the student is of length 1 (as in the learning networks), SIT has not one transition matrix, but two: one stores the successful transitions from the current activity to all the others, while the second one stores total transitions.
In the case of the learning networks, it is not important if 5 or 500 students have tried to complete activity node B after completing A. With the SIT approach, both numbers are important and it is their joint
information that shows that a transition is probably a very bad one.
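The point about relative success can be made concrete with a small sketch of the two matrices (the counts are illustrative):

```python
# One matrix of successful transitions and one of total attempts, as
# described for a state of length 1. Only the ratio of the two reveals
# whether a transition is a good or a bad one.
successful = {("A", "B"): 5}
attempts   = {("A", "B"): 500}

def success_ratio(src: str, dst: str):
    n = attempts.get((src, dst), 0)
    if n == 0:
        return None  # no trace yet: the cold-start situation
    return successful.get((src, dst), 0) / n

# 5 successes alone look fine; 5 out of 500 attempts reveal a bad transition.
print(success_ratio("A", "B"))  # → 0.01
```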
COLLABORATIVE FILTERING

Over recent years, the growth of e-commerce has stimulated the use of collaborative filtering systems as recommender systems. The goal of a modern collaborative filtering system may be stated as predicting the utility of a certain item for a particular user based on the user's previous likings and the opinions of other like-minded users. Collaborative filtering is based on the premise that people looking for information should be able to make use of what others have already found and evaluated. In a way, collaborative filtering systems are organisers of knowledge: the preferences of the users and their processing create the classification scheme. Modern collaborative filtering systems can be classified into memory-based and model-based. Memory-based systems employ a user-item database to generate a prediction, using statistical techniques to find a set of users (neighbours) that have a history of agreeing with the target user (Pennock, Horvitz, Lawrence, & Giles, 2000). Model-based collaborative filtering algorithms provide item recommendations by first developing a model of user ratings. Algorithms in this category take a probabilistic approach and envision the collaborative filtering process as computing the expected value of a user prediction, given the user's ratings on other items. The model-building process is performed by different techniques such as Bayesian networks (Miyahara & Pazzani, 2000), latent semantic analysis (Hofmann, 2003), and rule-based approaches (Boley, 2003). There have been some collaborative filtering systems explicitly designed for assistance in the learning process, like PHOAKS (www.phoaks.com) or LON-CAPA, but the most interesting for the scope of this chapter is CoFIND. Not only is it a collaborative filtering system that
is focused on learning, specifically on providing some of the "features" of a teacher (Dron, 2002); it is also designed to produce auto-organization and stigmergic processes as the swarm of students interacts with it.
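A minimal memory-based prediction of the kind described above might look as follows. The ratings and the choice of cosine similarity are illustrative, not taken from any of the cited systems:

```python
import math

ratings = {  # user -> {item: rating}, a toy user-item database
    "ann": {"r1": 5, "r2": 3},
    "bob": {"r1": 4, "r2": 2, "r3": 5},
    "eve": {"r1": 1, "r3": 2},
}

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity over the items both users have rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm = lambda w: math.sqrt(sum(x * x for x in w.values()))
    return dot / (norm(u) * norm(v))

def predict(user: str, item: str):
    """Predict a rating as the similarity-weighted average of the ratings
    given to the item by the user's neighbours."""
    pairs = [(cosine(ratings[user], ratings[v]), ratings[v][item])
             for v in ratings if v != user and item in ratings[v]]
    total = sum(sim for sim, _ in pairs)
    return sum(sim * r for sim, r in pairs) / total if total else None

print(predict("ann", "r3"))  # pulled toward bob's 5, since bob agrees with ann
```

Model-based approaches would instead fit a model (e.g., a Bayesian network) to the ratings and query it, trading this direct neighbour lookup for a learned representation.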
COFIND

CoFIND (Dron, Mitchell, Siviter, & Boyne, 1999) is a collaborative filtering system that tries to help students generate and use a list of learning resources. The resources are found by the students and organized automatically. This organization is not the result of rules predefined by the tool designer or a pedagogical team; it is the result of the interactions of the students with the resources they and their peers have provided, and with each other. CoFIND aims to facilitate communication between the learners (whether this communication is direct or indirect), as well as to provide information in the way an expert in the subject would. This information is structured in a way that makes it more easily assimilated by the learner. In order to reach that goal, the system makes use of the votes of the users for the resources (and their qualities) and the usage that the users make of them. There are four concepts that are central to CoFIND: resources, qualities, votes, and topics. Resources are identified with a URL. Both qualities and topics describe resources. Qualities are metadata tags that indicate the opinion of a learner about a resource (e.g., interesting, badly explained, amusing, etc.). Topics provide thematic clustering of resources. Looking for auto-organization, CoFIND takes an evolutionary approach in which resources compete with each other. The best fitted (in the sense of most useful for the students) will survive and the others will disappear. As is the case for resources, qualities are created by the users of the system. There are no constraints on what can be a quality. "Helpful," "written properly," or "low level" are all examples
of qualities, but "with purple letters" is as well. In order to prevent the system from being flooded with qualities of little or no interest to the users, qualities disappear from the system if they are not used for a certain period of time. This produces a competition between the qualities, in which only those that are useful (and used) remain in the system. Successful qualities are those that are used or rated (see below) more often. They are presented at the top of the list of qualities. Unsuccessful qualities are pushed down the list by the others, until they finally disappear because nobody is interested in them anymore. In the early versions of CoFIND, qualities disappeared from the system if they had not been used for a week. As was the case with the Paraschool system, the approach of using natural time for the evolution of the system presented several problems, not least the fact that if the system was not used for a week, all qualities (including useful ones) disappeared from it. Thus, more recent versions of CoFIND use a weighting algorithm based on a kind of user-based time. New qualities start with a weighting equivalent to the largest number of ratings given to any quality currently in the system. Whenever a quality is chosen by any user, its weighting increases by one. When a user logs into the system, the weightings of all the qualities are reduced by one. Thus, those qualities that are not used will see their weightings go down to zero in a system running normally. On the other hand, if the system is not heavily used for any reason (e.g., summer holidays), it preserves its state. Later versions also modified the user interface related to qualities. Those qualities that have been used more often to describe resources appear on the list in a larger font. The user of the system therefore has two kinds of information when selecting or searching for a quality.
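The user-based time weighting described above might be sketched like this. The class name and the exact removal rule are illustrative, not CoFIND's actual code:

```python
class QualityPool:
    def __init__(self):
        self.weights = {}  # quality -> current weighting

    def add(self, name):
        # A new quality starts level with the highest-weighted existing one
        # (approximating "the largest number of ratings given to any quality").
        self.weights[name] = max(self.weights.values(), default=0)

    def use(self, name):
        # Each use (or rating) of a quality increases its weighting by one.
        self.weights[name] += 1

    def on_login(self):
        # User-based time: ageing happens per login, not per wall-clock day,
        # so an idle system (e.g., summer holidays) preserves its state.
        self.weights = {q: w - 1 for q, w in self.weights.items() if w > 1}

pool = QualityPool()
pool.add("helpful")
for _ in range(5):
    pool.use("helpful")              # "helpful" weighting climbs to 5
pool.add("purple letters")           # starts level with "helpful", at 5
for _ in range(5):
    pool.on_login()                  # every login ages all qualities...
    pool.use("helpful")              # ...but "helpful" keeps being used
print(sorted(pool.weights))          # → ['helpful']
```

Qualities that nobody uses drift down and vanish, while qualities that keep being chosen hold their place: the competition described in the text.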
Position on the list expresses the weighting: those qualities that are near the top of the list have been used recently or are newly created. The size of the font expresses how many resources have already been described
with this quality. In other words, the size of the quality is an indicator of how many resources can be found using CoFIND when searching for that particular quality, while position is an indicator of the usefulness of the quality for other users. It is important to note that the number of rated resources is a piece of explicit knowledge, but the usefulness of a quality is implicit knowledge generated from user statistics. Another interesting point here is that the quality alone does not describe the resource: a resource may be described by a quality with a lot of negative votes (e.g., a boring document described with the quality "amusing"). When users look for resources that have been described with a particular quality, they have the opportunity to express their agreement or disagreement with the selection provided. This is performed through a voting process, in which the users can express how much they agree with the description. The users vote using a discrete scale of six steps, to encourage positioning on the part of students by avoiding the "middle point." For example, if a student looks for "good for beginners" resources and obtains a very complex one, she may express that she totally disagrees with the "good for beginners" quality (for that particular resource). This rating affects the positioning of resources, as explained below. Resources are displayed in order of the average of their ratings. On its own, this would mean that a resource that has been rated once could be considered more successful than one that has been rated many times, although a single rating might be strongly atypical. Therefore, to give more importance to those resources that are rated more frequently, they are displayed before those that have been rated fewer times when the average rating is the same. Another important concept in CoFIND is that of topics. Topics are, in the end, binary classifications.
Topics address the need to create some separation between both resources and
qualities. Both are topic-dependent (e.g., a quality like "about bridges" needs some clarification about the nature of the bridges to know whether the resource is more relevant to civil or electronic engineers). In the same fashion as qualities and resources, topics are introduced by the users themselves. The users are presented by the system with a blank canvas divided into four quadrants. They can fill these sectors with topics of their choosing, selecting which sector contains each topic. Therefore, the sectors will probably present some clustering of related topics. Topic names in the same sector compete with each other for the space. Each time a topic is selected, the font used for displaying its name is increased, and the font used for the other topics is decreased. In order to avoid a stagnation phenomenon observed in earlier versions of the system, the increase in size when selecting a topic is proportional to the number of neighbours (more neighbours mean a bigger increase), while the decrease in size for the other topics is proportional to the inverse of the number of topics (more neighbours, a smaller decrease). For the sake of legibility, there are maximum and minimum bounds on the size of the font. Thus, there is no real death of topics; they never disappear. This method presents some instability problems with old Web browsers that have very few font sizes to display (e.g., only three sizes for the FONT tag of HTML), but behaves well for large numbers of topics. By relating growth and shrinkage of topic names to the number of competing topics, successful ones will continue to stand out in large populations as effectively as in small ones. This process produces a sort of stigmergy, in which students select popular paths, thus increasing the success of those paths and amplifying the behavioral pattern of the group to provide a structure for the learning material.
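The topic font-size dynamics just described might be sketched like this. The constants and proportionality choices are illustrative; the text gives the rule only qualitatively:

```python
# Selecting a topic grows its font in proportion to the number of competing
# neighbours, while the others shrink in inverse proportion to that number,
# within fixed bounds so that topics never disappear entirely.
MIN_SIZE, MAX_SIZE = 8, 32

def select_topic(sizes: dict, chosen: str) -> dict:
    n = len(sizes)
    grow = n            # more neighbours -> bigger increase for the chosen one
    shrink = 1.0 / n    # more neighbours -> smaller decrease for each other
    new = {}
    for topic, size in sizes.items():
        delta = grow if topic == chosen else -shrink
        new[topic] = min(MAX_SIZE, max(MIN_SIZE, size + delta))
    return new

sizes = {"ants": 12, "bridges": 12, "graphs": 12}
for _ in range(10):
    sizes = select_topic(sizes, "ants")
print(sizes)  # "ants" is capped at 32; the others shrink slowly toward 8
```

Because growth and shrinkage both scale with the population, a popular topic stands out just as clearly among thirty competitors as among three.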
There is a trade-off in the size of the community that can benefit from a system like
CoFIND. Large populations mean that the system will be helpful for more people, but a smaller group is more likely to make the system evolve faster. This is coherent with results from the natural sciences, which show that evolution tends to occur more rapidly in isolated populations (Gould, 1978). This leads to a useful mechanism for reducing the bloated excess of results that could be obtained from a typical search engine or directory. If a small group with a common learning interest compiles a set of resources (possibly found using search engines), there is a high probability that the most relevant ones will win out over the rest of the resources found by their colleagues. The evolutionary model of CoFIND creates a context-dependent taxonomy which captures the group's tacitly negotiated and agreed evaluations. Ambiguities and disagreements are not discussed but are resolved in a sort of indirect democratic process as users vote on and rate the resources.
CONCLUSION

The pervasive presence of the Internet allows for greater communication between different people than ever before, independently of their physical location. Although this situation brings its own risks, it opens the doors to many forms of collaboration between members of a community. Over recent years, several swarm-intelligence applications have appeared that take their inspiration from the biology of social insects. Combining some of this know-how with concepts from sociology has produced social-swarm applications. This chapter has presented those that are focused on e-learning, showing both their strong points and their weaknesses.
FUTURE RESEARCH DIRECTIONS

The Internet is still far from its peak of development, and the effects of "social e-networks" are far from being fully understood. In the near future, many more applications built on the foundations of those presented here will appear. Social navigation, closely related to social sequencing, is another emerging trend. Social navigation (Mertens, Farzan, & Brusilovsky, 2006) aims at using navigation information from the users to provide them with clues about where to move next. These clues might give information about the current position of users (e.g., which document a user is accessing now) or about the path they have followed. The clues can relate to groups of users as well; for example, documents that are accessed more frequently can be highlighted in some form. As the users interact increasingly with one another (either consciously or not) during their learning process, their awareness of their environment becomes more important. Learning is known to be a social process that benefits greatly from interaction with others, yet information technologies have traditionally limited learning to an impersonal paradigm. As technology provides means for real communication, it becomes important for students to be aware of the environment with which they interact (who is doing what, when, where, etc.). Possible sources of information that can be relevant to the student are virtual location and nearby peers, peers' activities and opinions, and the timing and age of activities and actors. Enhancing the awareness of students about all these issues can play an important role in their learning. Besides, being aware of one's peers allows one to interact with them directly. Finding the adequate equilibrium between direct and indirect communication (as in swarms) for promoting learning will be one of the challenges of the future.
REFERENCES

Boley, H. (2003). RACOFI: A rule-applying collaborative filtering system. In 2003 IEEE/WIC International Conference on Web Intelligence/Intelligent Agent Technology.
Bonabeau, E., Dorigo, M., & Theraulaz, G. (1999). Swarm intelligence: From natural to artificial systems. NY: Oxford University Press.
Costa, D., Hertz, A., & Dubuis, O. (1995). Embedding of a sequential algorithm within an evolutionary algorithm for coloring problems in graphs. Journal of Heuristics, 1, 105-128.
Deshpande, M., & Karypis, G. (2004). Selective Markov models for predicting Web page accesses. ACM Transactions on Internet Technology (TOIT), 4(2), 163-184.
Dorigo, M., & Stützle, T. (2004). Ant colony optimization. MIT Press.
Dron, J. (2002). Achieving self-organisation in network-based learning environments. Doctoral dissertation.
Dron, J., Mitchell, R., Siviter, P., & Boyne, C. (1999). CoFIND: Experiment in n-dimensional collaborative filtering. In World Conference on the WWW and Internet (pp. 301-306).
Gambardella, L. M., Taillard, E., & Agazzi, G. (1999). MACS-VRPTW: A multiple ant colony system for vehicle routing problems with time windows. New Ideas in Optimization, 63-76.
Gould, S. J. (1978). Ever since Darwin: Reflections in natural history. Burnett.
Gutiérrez, S., Pardo, A., & Delgado Kloos, C. (2006a). A modular architecture for intelligent Web resource based tutoring systems. Intelligent Tutoring Systems, 753-755.
Gutiérrez, S., Pardo, A., & Delgado Kloos, C. (2006b). Some ideas for the collaborative search
of the optimal learning path. In Adaptive Hypermedia 2006 (pp. 430-434).
Gutiérrez, S., Valigiani, G., Jamont, Y., Collet, P., & Delgado Kloos, C. (2007). A swarm approach for automatic auditing of pedagogical planning. In Proceedings of IEEE ICALT 2007 (pp. 136-138).
Hartley, J., & Sleeman, D. (1973). Towards more intelligent teaching systems. International Journal of Man-Machine Studies, 2, 215-236.
Hofmann, T. (2003). Collaborative filtering via Gaussian probabilistic latent semantic analysis. In 26th ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 259-266).
Kauffman, S. (1996). At home in the universe: The search for the laws of self-organization and complexity. Oxford University Press.
Kennedy, J., & Eberhart, R. (2001). Swarm intelligence. CA: Morgan Kaufmann/Academic Press.
Koper, R. (2005). Designing learning networks for lifelong learners. In R. Koper & C. Tattersall (Eds.), Learning design: A handbook on modelling and delivering networked education and training (pp. 239-252).
Mertens, R., Farzan, R., & Brusilovsky, P. (2006). Social navigation in Web lectures. In U. K. Wiil, P. J. Nürnberg & J. Rubart (Eds.), Proceedings of Hypertext Conference 2006.
Miyahara, K., & Pazzani, M. (2000). Collaborative filtering with the simple Bayesian classifier. In Pacific Rim International Conference on Artificial Intelligence (pp. 679-689).
Morley, R. (1996). Painting trucks at General Motors: The effectiveness of a complexity-based approach. In Ernst and Young Center for Business Innovation (Ed.), Embracing complexity: Exploring the application of complex adaptive systems to business (pp. 53-58). Cambridge, MA.
Murray, T. (1999). Authoring intelligent tutoring systems: An analysis of the state of the art. International Journal of Artificial Intelligence in Education, 10, 98-129.
Pennock, D., Horvitz, E., Lawrence, S., & Giles, C. L. (2000). Collaborative filtering by personality diagnosis: A hybrid memory- and model-based approach. In 16th Conference on Uncertainty in Artificial Intelligence (pp. 481-488).
Semet, Y., Lutton, E., & Collet, P. (2003). Ant colony optimization for e-learning: Observing the emergence of pedagogic suggestions. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium.
Tattersall, C., Manderveld, J., Van den Berg, B., Van Es, R., Janssen, J., & Koper, R. (2005). Swarm-based wayfinding support in open and distance learning. In E. M. Alkhalifa (Ed.), Cognitively informed systems: Utilizing practical approaches to enrich information presentation and transfer (pp. 166-183).
Valigiani, G., Jamont, Y., Biojout, R., Lutton, E., & Collet, P. (2005). Experimenting with a real-size man-hill to optimize pedagogical paths. In H. Haddad, L. Liebrock, A. Omicini & R. Wainwright (Eds.), ACM Symposium on Applied Computing (pp. 4-8).
Valigiani, G., Lutton, E., Jamont, Y., Biojout, R., & Collet, P. (2006). Automatic rating process to audit a man-hill. WSEAS Transactions on Advances in Engineering Education, 3(1), 1-7.
van Bruggen, J., Sloep, P., Van Rosmalen, P., Brouns, F., Vogten, H., & Koper, R. (2004). Latent semantic analysis as a tool for learner positioning in learning networks for lifelong learning. British Journal of Educational Technology, 35(6), 729-738.

ADDITIONAL READING

Dron, J. (2005). E-learning and the building habits of termites. Journal of Educational Multimedia and Hypermedia, 14(4), 321-342.
Dron, J. (2006). Social software and the emergence of control. In 6th IEEE Conference on Advanced Learning Technologies (pp. 904-908).
Dron, J. (2007). The safety of crowds. Journal of Interactive Learning Research, 18.
Dron, J., Boyne, C., & Mitchell, R. (2001). Footpaths in the stuff swamp. In World Conference on the WWW and Internet (pp. 323-328).
Dron, J., Boyne, C. W., & Mitchell, R. (2004). The evaluation of forms of assessment using n-dimensional filtering. International Journal on E-Learning, 3(4).
Dron, J., Mitchell, R., & Boyne, C. W. (2003). Evolving learning in the stuff swamp. In N. Patel (Ed.), Adaptive evolutionary information systems (pp. 21-218). Hershey, PA: Idea Group.
Farzan, R., & Brusilovsky, P. (2006a). AnnotatEd: A social navigation and annotation service for Web-based educational resources. In World Conference on E-Learning.
Farzan, R., & Brusilovsky, P. (2006b). Social navigation support in a course recommendation system. In V. Wade, H. Ashman & B. Smyth (Eds.), 4th International Conference on Adaptive Hypermedia (pp. 91-100).
Gutiérrez, S. (2007). Sequencing of learning activities oriented towards reuse and auto-organization for intelligent tutoring systems. Unpublished doctoral dissertation, University Carlos III of Madrid.
Gutiérrez, S., Pardo, A., & Delgado Kloos, C. (2004). An adaptive tutoring system based on hierarchical graphs. In P. de Bra & W. Nejdl
Swarm-Based Techniques in E-Learning
(Eds.), 3rd Conference on Adaptive Hypermedia (pp. 401-404). Gutiérrez, S., Pardo, A., & Delgado Kloos, C. (2006). Finding a learning path: A swarm intelligence approach. In Proceedings of IASTED Web-Based Education. Janssen, J., Tattersall, C., Waterink, W., Van den Berg, B., Van Es, R., Bolman, C. et al. (2007). Self-organizing navigational support in lifelong learning: how predecessors can lead the way. Computers & Education, 49(3), 781-793. Klamma, R., Chatti, M. E., Duval, E., Hummel, H. G. K., Hvannberg, E. T., Kravcik, M., et al. (2007). Social software for life-long learning. Technology & Society, 10(3), 72-83. Kurhila, J., Miettinen, M., Nokelainen, P., & Tirri, H. (2002). EDUCO—A collaborative learning environment based on social navigation. In 2nd Conference on Adaptive Hypermedia. Lakoff, G. (1987). Women, fire, and dangerous things. IL: University of Chicago Press. Mobasher, B. (2004). Web usage mining and personalization. In M. P. Singh (Ed.), Practical handbook of Internet computing. Chapman & Hall/CRC Press. Prieto Linillos, P., Gutiérrez, S., Pardo, A., & Delgado Kloos, C. (2006). Sequencing parametric exercises for an operating system course. IFIP Artificial Intelligence Applications and Innovations, 450-458. Socha, K., Knowles, J., & Sampels, M. (2002). A MAX-MIN ant system for the university timetabling problem. In Third International Workshop on Ant Algorithms (ANTS 2002) (pp. 1-13). Tattersall, C., Manderveld, J., van den Berg, B., van Es, R., Janssen, J., & Koper, R. (2005). Self organising wayfinding support for lifelong learners. Education and Information Technologies, 10(1-2).
Tattersall, C., Van den Berg, B., Van Es, R., Janssen, J., Manderveld, J., & Koper, R. (2004). Swarm-based adaptation: Wayfinding support for lifelong learners. In P. de Bra & W. Nejdl (Eds.), 3rd Conference on Adaptive Hypermedia (pp. 336-339). Van den Berg, B., van Es, R., Tattersall. C., Janssen, J., Manderveld, J., Brouns, F., et al. (2005). Swarm-based sequencing recommendations in e-learning. Intelligent Systems Design and Applications, 3(3), 1-11. Valigiani, G., Jamont, Y., Biojout, R., Lutton, E., Fonlupt, C., & Collet P. (2006). Man-hill optimization of pedagogical paths in an e-learning system. Special issue on artificial evolution. In P. Collet (Ed.), Techniques et Sciences Informatiques. Hermes. Valigiani, G., Lutton, E., Jamont, Y., Biojout, R., & Collet, P. (2006). Automatic rating process to audit a man-hill. WSEAS Transactions on Advances in Engineering Education, 1(3), 1-7. Valigiani, G., Lutton, E., & Collet P. (2006). Adapting the ELO rating system to competing sub-populations in a man-hill. In 13th ISPE International Conference on Concurrent Engineering (pp. 766-774). IOS Press. Valigiani, G., Lutton, E., Jamont, Y., Biojout, R., & P. Collet (2005). Evaluating a real-size man-hill. In WSEAS International Conference on E-Activities. Zlochin, M., & Dorigo, M. (2002). Model-based search for combinatorial optimization: A comparative study. In 7th International Conference on Parallel Problem Solving from Nature (pp. 651-661).
Chapter XII
E-Learning 2.0:
The Learning Community Luisa M. Regueras University of Valladolid, Spain Elena Verdú University of Valladolid, Spain María A. Pérez University of Valladolid, Spain Juan Pablo de Castro University of Valladolid, Spain María J. Verdú University of Valladolid, Spain
Abstract
Nowadays, most electronic applications, including e-learning, are based on the Internet and the Web. As the Web advances, applications should progress along with it. People in the Internet world have started to talk about Web 2.0. This chapter discusses how the concepts of Web 2.0 can be transferred to e-learning. First, the new trends of the Web (Web 2.0) are introduced and Web 2.0 technologies are reviewed. Then, we analyse how Web 2.0 can be transferred and applied to the learning process, in terms of methodologies and tools, and taking into account different scenarios and roles. Next, some good practices and recommendations for E-Learning 2.0 are described. Finally, we present our opinion, conclusions, and proposals about the future trends driving the market.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
E-Learning 2.0
Introduction
Immersed in a tremendous crisis, the evolution of the Web seemed to have collapsed. In a context where businesses crashed and most future expectations were about to be frustrated, the Web was redefined as Web 2.0, bringing back many believers who had previously been disillusioned. For some it was a "web quake," for others simply a new word with an old meaning. What is unquestionable is that it has contributed to relaunching the evolution of the Web, renewing the influence of the Web on our society and affecting most areas of citizens' lives. In this chapter, we will present some examples of how Web 2.0 affects different aspects of our lives.

Social software and collaboration are key elements of Web 2.0, but is their technology strictly new, or is it simply a new approach based on collaboration and participation? Technology will also have a place in this chapter: we will try to discover whether the technology is new or simply dressed up and reused. After examining society and technology, we will focus on education: Does E-Learning 2.0 exist? What is it exactly? Has it reached education centres? We will discuss new challenges and opportunities and show how Web 2.0 can be transferred to classrooms. A discussion about methodologies, tools, scenarios, and roles for E-Learning 2.0 will also be part of this chapter. Finally, some recommendations and guidelines for good practices will be provided and, to conclude, we will present our opinion and proposals about the future trends driven by the market and the environment. In summary, the objectives of this chapter are the following:

• To make readers aware of the new trends of the Web (Web 2.0)
• To review Web 2.0 technologies
• To analyse how Web 2.0 can be transferred and applied to the learning process, in terms of methodologies and tools, and taking into account different scenarios and roles
• To present good practices and recommendations for E-Learning 2.0
• To present our opinion, conclusions, and proposals about the future trends driving the market
Revolution in the Web: Web 2.0
Historically, there have been some "killer apps" in the Internet world. First there was e-mail, which is the most used application on the Internet. Second came the World Wide Web and browsers, which have been established as the dominant software platform for clients in the network. Recently, a new kind of service has come on stage, earning the right to a new denomination: Web 2.0. According to Wikipedia, the term Web 2.0 refers to a second generation of services available on the Web that lets people collaborate and share information online. As O'Reilly (2005) explains:

The concept of Web 2.0 began with a conference brainstorming session between O'Reilly Media and MediaLive International. Dale Dougherty, Web pioneer and vice-president of O'Reilly Media, noted that far from having crashed, the Web was more important than ever, with exciting new applications and sites popping up with surprising regularity. In their discussion, they found that the companies that had survived the collapse seemed to have some things in common, and they thought that the dot-com collapse could have marked some kind of turning point for the Web. They agreed that a call to action might make sense and, thus, Web 2.0 was born.
Table 1. Different services implementation: Web 1.0 vs. Web 2.0 (adapted from O'Reilly, 2005)

                    Web 1.0                        Web 2.0
Platform            Netscape, Explorer…            Google services, AJAX
Web pages           Personal Web sites             Blogs
Word processor      Microsoft Word                 Google Docs (Writely)
Portals             Content Management Systems     Wikis
Encyclopedia        Britannica Online              Wikipedia
Knowledge           Directories (taxonomy)         Tagging (folksonomy)
References          URLs                           Syndication, RSS
Lookup              Domain name speculation        Search engine optimization
Role                Publishing                     Participation, collaboration
Media provision     Netmeeting                     Skype
Content             Akamai (content delivery)      BitTorrent (P2P)
Metrics             Page views                     Cost per click
Table 1 shows many of the differences between the two approaches. The most popular service of the new Web generation is known as the blog (from "binnacle log"). A blog is a Web site, updated periodically, that chronologically compiles content from one or several authors, with the most recent entries appearing first. Content may be text, pictures, audio, or video, and the style is usually personal and informal. Another good representative of these new services is the wiki (from the Hawaiian word wiki, meaning fast), a type of Web site that allows visitors themselves to easily add, remove, or otherwise edit and change some available content, sometimes without the need to register. This ease of interaction and operation makes a wiki an effective tool for collaborative authoring. Perhaps the best-known use of a wiki is the Web-based encyclopedia Wikipedia (http://www.wikipedia.org), the maximum exponent of collaborative effort on the Internet. At this stage, there is evidence that 2006 was the year of Web 2.0, such as the success of social communication sites (like LinkedIn, MySpace, YouTube, or Facebook), where social technologies (like blogs or wikis) have spread like wildfire. However, while most users are still tasting the success of Web 2.0 or the "social Web," many people are getting ready for the boom of the next generation of the Internet: Web 3.0 or the "semantic Web." As a mainstream tool, the Web is still in its infancy, and plenty of experimentation is still happening as people grapple with how to use it (Evans, 2006).
Key Technologies for Web 2.0
With Web 2.0 being more a philosophy of service than a new system, it may be hard to talk about the technology that runs it. In fact, every known technology effectively supports the services of Web 2.0. Indeed, the current services that everyone would classify as Web 2.0 run on old platforms and well-proven Web 1.0 technologies. A handful of technologies support the majority of services in use today or envisioned for the next years: HTTP as the application communication protocol, XML and HTML for content representation, and Java, ASP, and PHP as server-side languages. These are well-known, general-purpose technologies that have been fuelling the overall Internet industry for years, so they do not contribute specially to Web 2.0.
Really, it is the new way of using these technologies that makes Web 2.0 a genuinely new concept that deserves a name of its own. From an architectural point of view, some techniques developed as standards deserve mention: Web services (with related technologies such as REST, SOAP, UDDI, and so forth), syndication "protocols" such as RSS and ATOM, AJAX as a new technique for freshening up the world of Web user interfaces, and peer-to-peer networks and ubiquitous networking with presence indication (SIP). All these concepts will be reviewed in the following sections.
Web Services in Brief
In the past, many services were developed from scratch by volunteers or professionals time and again. Much of the research on computing has focused on reusing and integrating previous efforts. This has led to a vast variety of standards, frameworks, libraries, application programming interfaces (APIs), and component architectures that form a world of islands of standardized computing technologies. Many bridges, gateways, tunnels, and connectors have, of course, been built between them, but the result is a very heterogeneous technology background. To make modern Web 2.0 systems work, many sources of information and computing resources must be orchestrated to compose a powerful and popular service. This should be accomplished by means of a common channel of communication and method invocation protocols. In the past, several vendors tried to standardize their own versions of distributed method invocation technologies, such as DCOM, RMI, or CORBA. However, although some of them are very popular, none has achieved the status of a universal standard.

More recently, a set of standards proposed by the World Wide Web Consortium has emerged as a respected and powerful common infrastructure for communication between independent computing services, collectively named Web services. The key to their success is that these services rely mainly on standardized XML documents and HTTP transport, which are available on most computing platforms. Hence, this approach can work in many environments with fewer infrastructure changes than older frameworks required. A Web service is described by means of an XML document known as a Web Service Description Language (WSDL) document and made public in a registry named Universal Description, Discovery and Integration (UDDI). The messages involved in the communication are XML documents with a schema known as Simple Object Access Protocol (SOAP). All of these components make up an infrastructure to bind independent, remote systems together and configure a complex and more powerful service. For example, a small Web service could borrow the powerful search engine and mapping tools from Google (http://code.google.com), and mix them with the latest headlines of world news from the BBC and some local content at almost no cost.
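To make the message format concrete, the sketch below composes a SOAP-style request envelope with Python's standard library. The service namespace and the getHeadlines operation are invented for illustration; a real client would obtain both from the service's WSDL description rather than hard-coding them.

```python
# Minimal sketch of composing a SOAP request envelope (Python stdlib only).
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.org/news-service"  # hypothetical service namespace

def build_soap_request(operation, params):
    """Wrap an operation call in a SOAP Envelope/Body structure."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{SVC_NS}}}{operation}")
    for name, value in params.items():
        arg = ET.SubElement(call, f"{{{SVC_NS}}}{name}")
        arg.text = str(value)
    return ET.tostring(envelope, encoding="unicode")

# The resulting document would normally be POSTed over HTTP to the endpoint.
message = build_soap_request("getHeadlines", {"topic": "world", "count": 3})
print(message)
```

Because the envelope is plain XML over plain HTTP, any platform with an XML parser can produce or consume it, which is precisely the portability argument made above.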
New Look for the Browser: AJAX
Simultaneously with the development of the previous technological framework, the world of user interfaces has evolved too. Web applications adopted early on a model of programming different from that of the old Windows-based applications, called representational state transfer (REST). In REST, the flow of the application is encoded in the documents exchanged between the server and the client instead of being stored as persistent objects on either side. This can be observed in every Web page, where the logic of the application is represented in the content itself, mainly in the form of hyperlinks. For a decade, almost every mass service on the Web relied on this architecture in order to relieve the servers' load and allow more requests to be served.
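The REST principle just described, application state carried in the hyperlinks of each response rather than in server-side session objects, can be sketched as follows. The toy catalogue data and the /catalogue?page= URL scheme are invented for illustration.

```python
# Sketch of stateless, REST-style page rendering: every response embeds,
# as hyperlinks, all the state a client needs to continue browsing.
ITEMS = [f"item-{n}" for n in range(1, 11)]  # toy catalogue of 10 items
PAGE_SIZE = 3

def render_page(page):
    """Return an HTML fragment whose links carry the prev/next state."""
    start = (page - 1) * PAGE_SIZE
    rows = "".join(f"<li>{item}</li>" for item in ITEMS[start:start + PAGE_SIZE])
    links = []
    if page > 1:
        links.append(f'<a href="/catalogue?page={page - 1}">prev</a>')
    if start + PAGE_SIZE < len(ITEMS):
        links.append(f'<a href="/catalogue?page={page + 1}">next</a>')
    # No session object is kept anywhere: the flow lives in the links.
    return f"<ul>{rows}</ul>" + " ".join(links)

print(render_page(2))
```

Since each request is self-contained, any server replica can answer it, which is why this architecture relieves server load and scales to mass services.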
Figure 1. Differences between traditional Web applications (upper figure) and AJAX (lower figure). Note the difference in server load and the increase in interactivity.
Recently, a new pattern of Web application has arisen: Asynchronous JavaScript and XML (AJAX). It may seem a step back to the age of heavy-weight client applications, but with the Web browser as an innovative running environment. The root technologies of AJAX are XML as the communication format, XHTML and DHTML as the graphical user interface toolkit, and fully evolved JavaScript support at the browser level. With careful development of the JavaScript logic in the documents, a fully interactive, fast-response application can be deployed without installing any additional software on the client (see Figure 1). The result is a Web page that looks polished and reacts smoothly. The irritating page reloads and waiting times are avoided, and updates on the screen happen just as they would in traditional client applications.
Syndicating and Mashing
As the success of a service relies mainly on the quality of its content, several projects have failed due to the cost of acquiring such information in a timely fashion. On the other hand, many information providers cannot obtain enough social repercussion in a monopolized mass-media market. Moreover, the advent of new community-driven services, such as blogs, has increased the offer of information and has made social impact more difficult to achieve. For providers it is worth sharing content with popular services, and for the latter it is worth feeding from providers. This process of mutual benefit is generally called syndication. In Web 2.0, users build spontaneous networks and communities (the blogosphere), and content is selected and promoted by means of an organic-like mechanism of selection and propagation. Interesting sources of information are cited and linked on more sites, and their global ranking is calculated by measuring the community's interest in them. To enable such a mechanism, a technical solution is needed. This involves mainly a standard language of publication: RSS and ATOM are the most popular. They are ways of
representing information that is accessed by means of classical protocols; hence, they are more languages than protocols. But, as they include state information and instructions for the access and retrieval of information, they are usually known as protocols. Nowadays, Web service providers are using RSS/ATOM feeds as lightweight alternatives to SOAP, and developers are finding new ways to combine Web services from different sites into new applications, known as "mash-ups" in the lingo of Web 2.0. RSS stands for Really Simple Syndication, and, indeed, it is: an RSS feed is a very simple document that provides users with a title, an abstract, and a link to the information. To overcome the limitations of such a simple solution, a more flexible and powerful format has appeared under the name ATOM.
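An RSS document really is just titles, abstracts, and links. The sketch below parses a minimal RSS 2.0 feed with the Python standard library, extracting exactly those three elements; the feed content itself is invented for illustration.

```python
# Parsing a minimal RSS 2.0 feed: title, abstract (description), link.
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item>
    <title>Web 2.0 in class</title>
    <description>Notes on using wikis with students.</description>
    <link>http://example.org/posts/1</link>
  </item>
</channel></rss>"""

def read_feed(xml_text):
    """Return (channel title, list of (title, abstract, link) tuples)."""
    channel = ET.fromstring(xml_text).find("channel")
    items = [(i.findtext("title"), i.findtext("description"), i.findtext("link"))
             for i in channel.findall("item")]
    return channel.findtext("title"), items

name, entries = read_feed(FEED)
print(name, entries[0][0])
```

A mash-up, in these terms, is simply a program that reads several such feeds (or Web service responses) and recombines their items into a new page or service.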
Other Networks in the Network
Although the Web is the most successful and representative service on the Internet, we should not forget other interesting and very promising networked services that are also earning great acceptance and expectation. Peer-to-peer networks are frequently proposed as the alternative to a centralized and easy-to-control network. These services rely on the computing resources of every participant, instead of relying on the giant servers of commercial providers. This approach allows services to be supported with the resources provided by the users themselves, and there are no scalability problems, as newcomers contribute new resources to the network. Some examples of this approach are file sharing, in which bandwidth and storage are shared (eDonkey, eMule, or BitTorrent); computing grid projects, in which spare computing time is shared (SETI@home or BOINC); and Internet telephony, in which bandwidth and geographical proximity are shared (Skype).

Winds of Change Blowing the Web: Web 2.0 and the Information Society
Web 2.0 affects all aspects of life, from the private and fun sphere to work environments, from the individual level to the group level. In short, Web 2.0 becomes an enabler to (Prinz, 2006):

• Turn an Internet presence into a customer experience site
• Turn a passive consumer into a creative content provider
• Reinforce the link between Internet users and Internet places
• Support the self-organisation of communities
• Create a social Web on top of the technical Web

Currently, we can access lots of information and, at the same time, write our own content or leave comments: we are now the content generators. Education, marketing, journalism, and even politics are moving towards this reality 2.0. Marketing 2.0 is about getting integrated into the message "constellation" (Thorson, 2006). Web 2.0 gives companies the opportunity to reach thousands of customers at a much lower cost. Companies need to understand that their customers are their best marketers; a positive comment about a product or service is still more powerful than any commercial or print ad. Until now, those comments took place at cocktail parties, where very few people knew about them. Now, however, the place is the blogosphere, where everybody can read and comment on them. In this sense, Polo (2006) gives her opinion about the power of blogs: "imagine a database in which you can search lots of conversations about your products, with good and bad opinions... terrifying, isn't it?" Another interesting example is journalism, where the blog is also being extensively used as a tool to allow more direct and quicker communication with readers, especially when journalists cover the news in the exact place where it happens, as war reporters do. Moreover, some citizens are already using the blogosphere as a political space. The blog must not be a simple electoral propaganda element: currently, politicians use blogs to talk about general subjects, transmit their own ideas, explain their activity, inform about their areas of responsibility, listen to and receive citizens' opinions, and share knowledge. On the other hand, Web 2.0 facilitates a change in the way people interact online, so that many daily activities are being affected, even some that had not surrendered so far, changing strongly established habits. An interesting example is the use of the blog as a public personal diary.
E-Learning 2.0 Reaches Classrooms
How can the concepts of Web 2.0 be transferred to e-learning? This is something the e-learning community is wondering at the moment. Just as Web 2.0 is not a completely defined context but a constantly developing environment, so is E-Learning 2.0. Regarding collaboration and information sharing, Web 2.0 gives users an experience that is richer and more dynamic than the traditional static Web pages, which were generated by individuals or closed groups. In fact, one of the major components of the Web 2.0 movement is social software. The idea dates as far back as the 1960s and J.C.R. Licklider's thoughts on using networked computing to connect people in order to boost their knowledge and learning ability (Alexander, 2006). Social software enables users to rendezvous, connect, or collaborate through computer-mediated communication and to form online communities.
Broadly conceived, Web-based social software could encompass older media, but some would restrict its meaning to more recent phenomena such as blogs and wikis (as defined in Wikipedia, http://en.wikipedia.org/wiki/Social_software). The social revolution is proving even more important and extraordinary than the technological revolution. Maybe the most visible aspect of this revolution is the use of tools such as blogs or wikis, which are contributing to the change from a Read Web to the Read/Write Web that Tim Berners-Lee, the creator of HTML, envisioned. In the context of education, this trend will transform the way we have taught so far. Learners will not only learn, but they will also interact and share knowledge, since they have tools that allow them to do so easily. The process of producing Web content and, therefore, e-learning materials can be started and developed almost without any technical knowledge and without an excessive time investment. This makes the launching of e-learning experiences an easier task and lets the teacher remain a teacher, without obliging him or her to become an expert in information and communication technologies (ICT). According to Wikipedia, the momentum in the area of e-learning is based on the confluence of several important trends:

• Dramatically lower effort to compose e-learning solutions based on Web 2.0 technologies and tools
• Demand in corporate settings for training that engages learners in the process over a course of time
• Recognition in e-learning of the importance of blended learning
• The trend toward student-centred design
• The theory of connectivism (Siemens, 2005)
• Free-Libre Open Source Software (FLOSS) and open access
• Educational blogging
Putting all this together, we could talk of E-Learning 2.0 as the learning community that uses active ICT-based collaboration and communication as the main basis of the learning process. According to O'Reilly (2005), "there is still a huge amount of disagreement about just what Web 2.0 means, with some people decrying it as a meaningless marketing buzzword, and others accepting it as the new conventional wisdom." In our opinion, Web 2.0 is not really new, but it is a new term successfully used to refer to the recent changes experienced by the Web; with more than 237 million citations in Google in October 2006, it has clearly taken hold. A similar discussion could take place about E-Learning 2.0: is it really a different scenario for e-learning? There are very critical opinions. For example, Jennings (2005) states that E-Learning 2.0 is a rhetorical manoeuvre by e-learning suppliers and consultants to distance themselves from the failures of the first wave of e-learning, or the bastard neologism offspring of e-learning and Web 2.0 technologies. Regarding the definition of E-Learning 2.0, an interesting discussion has taken place in the EdNA (Australia) groups on social networking, philosophy, and pedagogy. In this context, FitzGerald (2006) discusses how Web 2.0 is matched with E-Learning 2.0:

• E-Learning 1.0 was static packaged content developed by content developers, such as CD-ROMs and courseware. It had little true interactivity and learner input, and very little (if any) contact with a tutor.
• E-Learning 1.5 is best represented by Learner Management Systems (LMS): some packaged content and some provided by the teacher. There is more interaction with a teacher and some with peers (through forums and chat).
• E-Learning 2.0 will follow a student-centred model and will be centred on the personal
learning environments (PLE) using social software. Students generate and share content. They interact not only with teachers and their peers, but with anyone in the world they can learn from. Elsewhere, FitzGerald (2006) states:

The irony of all this is that we are returning to the way things were before modern schools were invented. It used to be that learning occurred within, and was supported by, the community. After schools were developed, students were removed from the community into the mainly artificial environment of the classroom, where they were expected to learn content divorced from context.

As Downes says, "there is nothing more virtual than the classroom! Just think how students learn foreign languages and consider how teachers try to recreate authentic France in their classrooms" (Blamire, 2006). E-learning as we know it has been around for 10 years or so. During that time, it has emerged from being a radical idea, the effectiveness of which was yet to be proven, to something that is widely regarded as mainstream. It is core to numerous business plans and a service offered by most colleges and universities. And now, e-learning is evolving along with the World Wide Web as a whole, changing to a degree significant enough to warrant a new name: E-Learning 2.0 (Downes, 2005b).
Methodologies, Tools, and Challenges in E-Learning 2.0
Once the technology and tools are available, the main emphasis must be placed on the methodology used to design and implement the new learning strategies and procedures, taking into account the scenarios and roles of the different agents that take part in the learning
process. Throughout this section, some classifications of the key e-learning elements are established and discussed. According to students' social interdependence and interaction in the classroom, learning methods can be classified into the following three types:

• Individualistic learning, which means "working by oneself to ensure that one's own learning meets a preset criterion independently from the efforts of the other students" (Johnson & Johnson, 1999).
• Cooperative learning, which is the instructional use of small groups so that students work together to maximize their own and each other's learning (Johnson, Johnson, & Holubec, 1998). Students are rewarded on the basis of the success of the group (Woolfolk, 2001).
• Competitive learning, which is mainly based on activities in which students compete individually or in teams. Students work against each other to achieve a good grade, and only some of them succeed.
Cooperative learning may be contrasted with competitive learning and individualistic learning. Within competitive situations, individuals seek outcomes that are beneficial to themselves and detrimental to others; the student's effort goes into performing faster and better than classmates (Johnson & Johnson, 1999). However, team competitions can combine the best of competitive learning with the best of collaborative learning. On the other hand, there are some "social" tools that are very useful for e-learning:

• Wikis provide unique collaborative opportunities for education. Combining freely accessible information, rapid feedback, simplified HTML, and access by multiple editors, wikis are being rapidly adopted as an innovative way of constructing knowledge.
• (Edu)Blogs are increasingly finding a home in education. Blogs remove the technical barriers to writing and publishing online, encourage students to keep a record of their thinking over time, and facilitate critical feedback by letting readers add comments, which could come from teachers, peers, or a wider audience.
• Flickr is a free photo-sharing site that provides teachers and students with an easy way to upload, share, and add notes to photos on the Web.
• Google Docs has quickly jumped into the educational field. It is an easy-to-use online word processor that students can access from anywhere with an Internet connection and work on collaboratively.
• Synchronous communication tools (such as videoconferencing or chat) allow online communication in real time and can help support group work and peer learning among students.
• Asynchronous communication tools (such as e-mail or newsgroups) allow time-independent interaction between participants and are well suited to cooperative learning strategies.
Forums, blogs, wikis, Flickr, and so forth can all be created to surround a course with an expanded set of learner resources. But it is necessary to be very careful to ensure that you understand who you teach, that is, the learner's scenario: primary school, secondary school, university education, lifelong learning, or informal learning. Finally, the roles of teachers and students should change:

• Teacher: from leader to facilitator. The teacher's role should go from absolutely controlling and leading everything that happens in the classroom to staying aside and accompanying students along their walk.
• Student: from passive to active. Rather than simply relying on their teachers and class sessions, students should take an active part when learning something: evaluating, making decisions, and being responsible for their own learning.
In this context, E-Learning 2.0 is mainly oriented to collaborative (or mixed competitive-collaborative) methodologies applied in each and every scenario where the teacher tends to be a facilitator and the student is encouraged to be as active as possible. In addition, relatively new tools, such as blogs, have already found their place in the e-learning framework: the edublogs. Examples of edublogs can be found on Web sites such as Weblogs of Teachers (http://www.superblog.org/planet/educacion), Aulablog (http://www.aulablog.com), or Edublogs (http://www.edublogs.org). The collaborative aspect of blogs is what has brought many teachers into the fold. The commenting capabilities in many blogging software packages provide students and teachers with an easy peer-review tool and make it easier to bring in experts from outside the classroom. In this respect, an interesting experience is the one developed by Will Richardson (a pioneer educational blogger). In 2002, his school had adopted Sue Monk Kidd's book, The Secret Life of Bees, as part of the literature curriculum, and he decided to set up a blog for students to converse about the book outside class. With great delight, he watched the students post ideas as well as artistic interpretations. He also posted a blog for parents to join the discussion. The result was a truly democratic learning space (O'Hear, 2006). It is also important to underline that E-Learning 2.0 raises new challenges and barriers. For example, digital literacy should be extended to cover the new tools (such as blogs, wikis, and so forth), which will be essential for the definitive launching of E-Learning 2.0. There are a lot of people who still have to learn that there is life on the Web beyond e-mail and e-commerce. First of
all, it is necessary to know and learn the use and features of Web 2.0 tools. This is a prerequisite in every scenario and for every participant, independently of the teachers' and students' roles and of the methodology. Digital literacy must cover every tool to be used, including, of course, synchronous and asynchronous communication tools, but especially focusing on the new tools and their possibilities. Moreover, it is necessary that students and teachers are familiar and comfortable with collaborative writing and the use of these new tools. The social and cultural practices of collaborative working are not yet in the students' repertoire of shared practices. For example, Grant (2006) implemented a 3-week wiki writing segment in her class of 13- to 15-year-olds and found that her students had great difficulty writing in a public space and altering other students' wiki work. The second important challenge has its origin in the anarchic and uncontrolled character of the Internet. E-Learning 2.0 encourages socialization, networking, participation, and collaboration, but how can we be sure that learners are safe from all the dangers that can be found on the Net? These dangers can be simple distractions, when students waste time in activities that do not contribute to the learning objectives, but they can also be really harmful, as digital predators know how to virtually capture their prey. This challenge is especially important in primary and secondary education and in informal learning, where the student's family and environment must take special care of the paths the teenager walks. Finally, another concern is that these new tools are being used to do the same old things. In this sense, blogs may simply be replacing the classic Web pages that teachers used as a notice board or as a space to deliver materials to their students.
At the same time, the efforts to motivate students to create their own blogs conform mainly to the classic patterns of learning practices: the teacher usually suggests the content, periodicity of updating, number of posts, style, and type and number of links, and so
forth, and, of course, establishes the criteria to evaluate the work done and its contribution to the final assessment of the student. The result is that new tools and technologies surrender to the predominant learning models and practices, producing only a false sense of innovation. Real innovation will only exist if new technologies and tools are accompanied by new learning approaches. The simple use of new tools does not by itself guarantee a change in the e-learning scenario or in its results. An important challenge is to define and design new approaches and strategies. It is here where the E-Learning 2.0 concept is valuable and makes sense: the discussion about E-Learning 2.0 may be the occasion to slow down and redefine e-learning on a more constructivist and collaborative basis.
BEST PRACTICES

Every learning experience based on constant communication and collaboration through ICT, and in which participants are active members of an online learning community, could be considered a contribution to E-Learning 2.0. But which are the best practices? For students, a blog can be used as a living record of their learning: a place to pose questions, publish work in progress, or provide links to (and comments on) relevant Web resources. Teachers may want to start their own subject-based blog where they can provide up-to-date information and comments on their subject area, as well as posting questions and assignments and linking to relevant news stories and Web sites (O'Hear, 2005). As with blogs, wiki software makes it possible to publish a Web site with very little technical knowledge, putting a greater emphasis on collaborative rather than on personal publishing. Every wiki entry has an "edit this page" button so that users can not only add new content but also make changes to existing pages.

Wikis are already making their mark in higher education and are being applied to just about any imaginable task. They are popping up like mushrooms, faithful to the word wiki (fast), at colleges and universities around the world, sometimes in impromptu ways and more often with thoughtful intent. Below, different ways to use wikis in education are shown:

• Easily create simple Web sites. Typically, when students are asked to create Web sites as part of a class project, they have to rely on the chance that someone in the group knows how to make a Web site, or that some sort of training is available. The wiki allows students to spend more time developing the content of the site, instead of trying to learn how to make one. As more organizations adopt the wiki for collaboration and information sharing, students will be well prepared to use it in their careers (Pearce, 2006).

• Course Web site. The Eukaryotic Genetics and Molecular Biology course at the University of Maryland, Baltimore County (UMBC) uses a wiki as its course Web site. Moreover, the Career Services unit at the University of British Columbia is using wiki pages to store and organize content for a major site redevelopment presenting job postings and career education (Brian, 2004).

• Project development with peer review. Students can use a wiki to develop a term paper, and might start by tracking their background research. Taking advantage of the automatic revision history, the teacher and peers can see the evolution of the paper over time and continually comment on it, rather than offering comments only on the final draft (Pearce, 2006).

• Group authoring. Using a wiki "pulls" group members together to build and edit the document, which strengthens the community within the group. It also provides immediate, equal access to the most recent version of the document for all group members (Pearce, 2006).

• Track a group project. Considering students' busy schedules, a wiki is very useful for tracking and completing group projects. It allows group members to track their research and ideas from anywhere with Internet access, helps them save time by seeing what sources others have already checked, and gives them a central place to collectively prepare the final product. According to Pearce (2006), one way to do this is to give each group a wiki page in which to write a paper, and give each member of the group a separate page to track his/her research and ideas for the paper. The "paper" page lets you see how the group is working collaboratively, and the individual pages let you track how each group member is developing contributions to the paper and give you a place to leave feedback and suggestions for each student.

• Tracking progress in your research group. Teachers can use the wiki to maintain a journal of work performed on group projects.

• Information repositories or teacher-librarians. For example, the SUNY Geneseo Collaborative Writing Project at the State University of New York, which consists of a Dictionary of Literary Terms (Schacht, 2006).

• Review and debate classes and teachers. Students at Brown University have started the Course Advisor Wiki, a place for students to collaboratively write reviews of courses they have taken. This wiki gives reviewers the flexibility to articulate their impressions, while readers get richer reviews that combine multiple impressions and perspectives. The Rhetoric and Composition course at Penn State University uses a wiki for students to tell about their experiences during the class and leave advice for the next group.

• Presentations. Some people are using a wiki instead of conventional presentation software, like Keynote and PowerPoint.

• Social interaction. Wikis have been used successfully to enable hundreds of students to participate in a collaborative icebreaker exercise at Deakin University. This project illustrates how e-learning practitioners can use wiki technology to enhance social interaction among students online (Augar, Raitman, & Zhou, 2004).

• Planning a conference. An academic research unit at the University of British Columbia used a wiki for planning a technoculture conference, in order to collect supporting resources and gather contributions from invited participants. They used the wiki during the conference to record group work. Participants subsequently edited their collaborative authorings, resulting in "conference proceedings" of an altogether different sort.
Moreover, when attempting to develop student-created wikis at school, McPherson (2006) recommends starting with easy-to-manage wiki projects. For example, a possible wiki writing project for primary-aged children is to create an animal alphabet wiki. Individuals, pairs, or groups of primary students can (a) choose an animal, (b) select a picture of that animal from the Internet, and (c) insert the picture and the first letter of the animal into their wiki. Intermediate students can use a wiki to create a story with multiple beginnings and endings or, as another example, develop their online and off-line map-reading and writing skills by collaboratively adding descriptive text to an online map, such as WikiMapia (www.wikimapia.org). Secondary students can use a wiki to create hyperlinks from an existing poem to pages containing their own interpretations, or they can even create their own collaborative poem.
Finally, it is important to note that E-Learning 2.0 is much more than blogs, wikis, and so forth. Teachers can create their own tools based on constant communication and collaboration, and in this way also contribute to defining E-Learning 2.0. As an example, we have developed the QUEST system (Verdú, Regueras, Verdú, Pérez, & de Castro, 2006). This tool presents both individual and collaborative environments in which intellectual challenges are proposed by teachers or students, as in a competition, and must be solved in a time-constrained way. The scores obtained make up a league-like classification from which the final grade is taken.
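The league-like grading just described can be sketched in a few lines of Python. This is only a hypothetical illustration: the function names, the way challenge scores are aggregated, and the linear rank-to-grade mapping are our assumptions for the example, not QUEST's actual implementation.

```python
from collections import defaultdict

def league_classification(results):
    """Aggregate challenge scores per student and rank them, league-style.

    results: iterable of (student, score) pairs, one per solved challenge.
    Returns a list of (rank, student, total_score) tuples, best first.
    """
    totals = defaultdict(float)
    for student, score in results:
        totals[student] += score
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    return [(rank, student, total)
            for rank, (student, total) in enumerate(ranked, start=1)]

def final_grade(rank, n_students, max_grade=10.0):
    """Map a league position linearly onto a 0..max_grade scale."""
    if n_students == 1:
        return max_grade
    return max_grade * (n_students - rank) / (n_students - 1)
```

For instance, `league_classification([("ana", 8), ("ben", 5), ("ana", 3), ("eva", 9)])` produces the table `[(1, "ana", 11.0), (2, "eva", 9.0), (3, "ben", 5.0)]`, and `final_grade(1, 3)` maps first place to the top grade. A real system would of course refine the rank-to-grade mapping (e.g., guaranteeing a minimum grade for participation) rather than using this purely linear scheme.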
RECOMMENDATIONS AND FUTURE TRENDS

In our opinion, e-learning is constantly evolving, and, currently, E-Learning 2.0 should be regarded as the target scenario. However, it is difficult to identify the future trends in E-Learning 2.0, as E-Learning 2.0 itself must be considered a trend. As O'Hear (2005) explains, the early promise of e-learning—that of empowerment—has not been fully realized. For many people, the experience of e-learning has been no more than a hand-out published online, coupled with a simple multiple-choice quiz. However, by using Web 2.0 tools, e-learning has the potential to become more personal, social, and flexible. Traditional e-learning approaches have been based on cumbersome (and often expensive) virtual learning environments, which tend to be structured around courses, timetables, and testing. In contrast, E-Learning 2.0 (as coined by Stephen Downes) takes a "small pieces, loosely joined" approach that combines the use of discrete but complementary tools and Web services—such as blogs, wikis, and other social software—to support the creation of ad-hoc learning communities (O'Hear, 2006). According to these ideas, the future trends and targets of e-learning, that is, E-Learning
2.0, include customization, flexibility, and networking. Customization can be defined as the capacity to adapt to the situation of each individual, and not the other way around. Customization, in terms of tools, methodologies, and contents, is an important element. The learning experience must be personalized and tailored to the characteristics of each scenario. There is also a need for an increased focus on the fulfilment of customer needs. It makes sense to take personality differences into account when designing an education system (Schank, 2002). In this sense, Freund and Piotrowski (2003) state that the use of e-learning and the mass customization concept will help to make a given person fit for a given job, or a given job fit for a given person, and overcome the efficiency paradox in developing and delivering education. An interesting idea is the appearance of services similar to eBay but in the context of online learning, where users can have access to exactly what they need. Do not make the mistake of thinking that by combining different online pages from different courses you can produce a "mash-up" course. More likely, you will only produce a "ransom note" course. A good trainer will look at the different technologies available and mash up a solution that will be effective for their learner base (Rosen, 2006). Flexibility, closely linked to the prior trend, refers to the possibility of offering multiple e-learning experiences (methodologies, workloads, objectives, and so forth) in order to guarantee personalization or customization for each learner, leading to the universalization of e-learning. E-Learning 2.0 must enable learning in every scenario to allow the integration of the educational life of students. In the new paradigm, e-learning should be individualized, localized, and globalized, with the aim of creating unlimited opportunities for lifelong learning. Students are the centre of education. Learning should be facilitated to meet their needs and personal characteristics, and to develop their potential. Students can be self-motivated and learn by themselves with appropriate guidance and facilitation, and the learning process becomes self-actualizing, discovering, experiencing, and reflecting (Cheng, 2002). Networking is linked to the idea of socialization and the possibility of taking advantage of any synergy that may arise in the net, including the reuse of materials, ideas, strategies, methodologies, and so forth, which, this way, are easily made available to everyone. Participants establish virtual identities and network with other participants. While still in the early stages of development, technology is permitting new ways of accessing information and interacting. Rapid knowledge growth requires off-loading the internal acts of cognition, sense and meaning making, and filtering to a network consisting of human and technology nodes. Thus, according to the connectivism theory, E-Learning 2.0 can be regarded as a distributed process within a network that recognizes and interprets patterns. As Downes (2005a) explained in his presentation at the Transitions in Advanced Learning Conference about "What E-Learning 2.0 Means To You," learning is a network phenomenon where a Web of user-generated content is an important pillar. The network is open, diverse (multiple views, technologies, etc.), connected, and interactive (not integrated), made of small pieces, loosely joined. E-Learning 2.0 is about enabling a social experience that recognizes the course is but one social-organizational group in a broader education environment. E-Learning 2.0 is about moving beyond the course towards a more holistic conception of a networked learning environment. One consequence of this shift is a hunger among educators to conduct research and benchmark various e-learning strategies and programs using data from peer institutions (Blackboard, 2006).
CONCLUSION

After some hard beginnings, e-learning seems to be definitively attracting massive attention, as the application of ICT is finally regarded as positive by the learning community. We could go even further and say that the institutions or teachers that do not count on this new element are doomed to become obsolete. On the other hand, the pressure of learner demand for learning experiences that make use of ICT in classroom activities, blended learning, or distance learning is growing stronger and stronger. In our opinion, one cycle is complete and we are now starting a second cycle that deals not with e-learning but with E-Learning 2.0. At the moment, there is a lukewarm welcome to E-Learning 2.0, but, as time goes by, we predict that learner demand will grow as quickly as the interest among institutions and teachers, who should be constantly evolving. With such a prediction, our recommendation is to anticipate needs and trends, be a pioneer while there is still time, and contribute to deciding what the next step in e-learning must be. Do not surrender to the trends if you do not think that they are the best possible practices, but dictate them instead. E-Learning 2.0 is about personalizing the e-learning environment to be more disciplined and pedagogically specific to the educational activity at hand (Blackboard, 2006). This more tailored platform experience must be achieved through specialized software extensions developed by and for educators, as well as through rich, interactive digital resources. One important conclusion is that the emphasis is, and should be, more focused on methodologies than on technologies, tools, platforms, or applications. Do not make the mistake of thinking that wikis, blogs, or forums are E-Learning 2.0. They are resources. E-Learning 2.0 needs structure and instructional design to be effective and provide a return on investment. Courses 2.0 should
never be a hotchpotch assembly of old methodologies delivered through new technologies. A new tool is only effective if it provides you with a better service (Rosen, 2006). The evolution of the Web does not have as much to do with the technology as one might think. As Downes (2005b) says, "The emergence of the Web 2.0 is not a technological revolution, it is a social revolution." Finally, in accordance with Anderson (2006), "stop thinking about e-learning 2.0 as a new toolset… It is so not about that! The E-Learning 2.0 is about people. The tools simply allow us to do what we do best… and that is connecting with other people to support, share and learn with each other."
FUTURE RESEARCH DIRECTIONS

Looking around schools and universities all over the world, we can observe that, although there are some pilot experiences, E-Learning 2.0 is not yet a widespread reality. In this sense, the design and development of educational strategies that include the use of E-Learning 2.0 tools, such as wikis, blogs, or quests, should be considered a future research direction. However, in this closing section, we would like to go further; so, beyond what we consider future fields of research, we would suggest working on the following ideas. Masie (2006) defined himself as a nano-learner. What does that mean? As he himself explained, each day he learned several things in small chunks, really small chunks. A 90-second conversation with an expert could trigger a huge "a-ha." A few moments concentrating on learning how something works could lead to a new micro-skill. Moreover, his opinion is that most people acquire most of their knowledge in smaller pieces. Masie pointed out this idea focusing on continuous training, but, in our opinion, it could be transferred to formal education.
We consider that people, along the different stages of life (childhood, adolescence, youth, and so forth), learn in what we could consider their global or macro-environment, made up of different micro-environments: home, family, school, university, recreational activities and centres, and so forth. Instead of living isolated experiences, people should live a global experience. Hence, it is necessary that the different micro-environments are interconnected from a technological point of view, through synchronous or asynchronous telematic tools, and, what is more important, that a global educational strategy is established. However, the reality is that it is very difficult, if not impossible, to find global learning strategies that can be successfully applied to such different contexts. Moreover, this global learning experience requires instant multiterminal access, as different environments are characterized by different terminals. Otherwise, the learning process could be slowed down if the occasion to live certain micro-learning experiences is lost due to the lack of instant access through the terminal available at each time and place. Another element to be further studied and promoted is the collaborative generation of contents. Sites like Wikipedia, helpfulvideo.com, and so forth, allow the collaborative publishing of contents in different formats. However, some problems appear: how can the quality of this content generated by everyone be supervised without making authors feel controlled or restricted? In this sense, future research should focus on developing adequate educational strategies to solve this.
To sum up, in our opinion, future research should focus on developing and validating effective learning strategies in a context where learners are considered to live a macro-learning experience built by adding micro-learning experiences that take place in different micro-environments, where instant access from a wide range of terminals is a requirement, and where multiformat contents are also generated by learners.
REFERENCES

Alexander, B. (2006). Web 2.0: A new wave of innovation for teaching and learning? EDUCAUSE Review, 41(2), 32-44.

Anderson, G. (2006). E-learning 2.0 is about people. Konferenz Professionelles Wissensmanagement - Erfahrungen und Visionen, live von der ICL 2006. Retrieved October 25, 2007, from http://elearningblog.tugraz.at/archives/130

Augar, N., Raitman, R., & Zhou, W. (2004). Teaching and learning online with wikis. Paper presented at the ASCILITE Australasian Society for Computers in Learning in Tertiary Education 2004 Conference, Perth, WA.

Blackboard. (2006). Blackboard unveils Blackboard Beyond initiative: Four bold inaugural projects will advance e-learning 2.0 vision. Retrieved October 25, 2007, from http://www.blackboard.com/company/press/release.aspx?id=823603

Blamire, R. (2006). Insight blog: The online diary of European Schoolnet's Insight team. Retrieved October 25, 2007, from http://blog.eun.org/insightblog/2006/06/elearning_20.html

Brian, L. (2004). Taking a walk on the wiki side. Campus Technology. Retrieved October 25, 2007, from http://www.campustechnology.com/article.asp?id=9200

Cheng, Y. C. (2002). Linkage between innovative management and student-centred approach: Platform theory for effective learning. Paper presented at the Second International Forum on Education Reform: Key Factors in Effective Implementation, Bangkok, Thailand.

Downes, S. (2005a). What e-learning 2.0 means to you. Paper presented at the meeting of the Transitions in Advanced Learning Conference, Ottawa.

Downes, S. (2005b). E-learning 2.0. eLearn Magazine. Retrieved October 25, 2007, from http://elearnmag.org/subpage.cfm?section=articles&article=29-1

Evans, M. (2006). Goodbye Web 2.0, long live Web 3.0. Retrieved October 25, 2007, from http://evans.blogware.com/blog/_archives/2006/11/12/2493546.html

FitzGerald, S. (2006, June 8). Social networking: Philosophy and pedagogy. 2006 Networks Community Forum, Edna, Australia. Retrieved October 25, 2007, from http://www.groups.edna.edu.au/mod/forum/discuss.php?d=6615

Freund, R. J., & Piotrowski, M. (2003). Mass customization and personalization in adult education and training. Paper presented at the 2nd Interdisciplinary World Congress on Mass Customization and Personalization MCPC2003, Munich, Germany.

Grant, L. (2006). Using wikis in schools: A case study. Retrieved October 25, 2007, from http://www.futurelab.org.uk/research/discuss/05discuss01.htm

Jennings, D. (2005). E-learning 2.0, whatever that is. DJ Alchemi Web, an individual brew of learning, culture and technology. Retrieved October 25, 2007, from http://alchemi.co.uk/archives/ele/elearning_20_wh.html

Johnson, D., & Johnson, R. (1999). Learning together and alone: Cooperative, competitive, and individualistic learning. Boston: Allyn and Bacon.

Johnson, D., Johnson, R., & Holubec, E. (1998). Cooperation in the classroom. Boston: Allyn and Bacon.

Masie, E. (2006). Nano-learning: Miniaturization of design. Chief Learning Officer (CLO) Magazine. Retrieved October 25, 2007, from http://www.clomedia.com/content/templates/clo_article.asp?articleid=1221&zoneid=173

McPherson, K. (2006). Wikis and student writing. Teacher Librarian. Retrieved October 25, 2007, from http://redorbit.com/news/education/761377/wikis_and_student_writing/index.html

O'Hear, S. (2005). Seconds out, round two. The Guardian. Retrieved October 25, 2007, from http://education.guardian.co.uk/elearning/story/0,10577,1642281,00.html

O'Hear, S. (2006). E-learning 2.0: How Web technologies are shaping education. In R. MacManus (Ed.). Retrieved October 25, 2007, from http://www.readwriteweb.com/archives/e-learning_20.php

O'Reilly, T. (2005). What is Web 2.0: Design patterns and business models for the next generation of software. O'Reilly Web. Retrieved October 25, 2007, from http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html

Pearce, J. (2006). Using wiki in education. The Science of Spectroscopy. Retrieved October 25, 2007, from http://www.scienceofspectroscopy.info/edit/index.php?title=Using_wiki_in_education

Polo, F. (2006). Marketing 2.0: New way to old things. Jornada Internet de Nueva Generación: Web 2.0, Internet 2.0, Spain. Retrieved October 25, 2007, from http://internetng.dit.upm.es/ponencias-jing/2006/polo.pdf

Prinz, W. (2006). Social Web applications. Jornada Internet de Nueva Generación: Web 2.0, Internet 2.0, Spain. Retrieved October 25, 2007, from http://internetng.dit.upm.es/ponencias-jing/2006/prinz.pdf

Rosen, A. (2006). Technology trends: E-learning 2.0. Learning Solutions e-Magazine. Retrieved October 25, 2007, from http://www.readygo.com/e-learning-2.0.pdf

Schacht, P. (2006). The collaborative writing project. Retrieved October 25, 2007, from http://node51.cit.geneseo.edu/WIKKI_TEST/mediawiki/index.php/Main_Page

Schank, R. C. (2002). Designing world-class e-learning. New York: McGraw-Hill.

Siemens, G. (2005). Connectivism: A learning theory for the digital age. Retrieved October 25, 2007, from http://www.elearnspace.org/Articles/connectivism.htm

Thorson, D. (2006). Marketing 2.0: The constellation. Retrieved October 25, 2007, from http://donthorson.typepad.com/don_thorson/2006/04/the_constellati.html

Verdú, E., Regueras, L. M., Verdú, M. J., Pérez, M. A., & de Castro, J. P. (2006). Improving the higher education through technology-based active methodologies: A case study. WSEAS Transactions on Advances in Engineering Education, 3(7), 649-656.

Woolfolk, A. (2001). Educational psychology. Boston: Allyn and Bacon.
ADDITIONAL READINGS

Alexander, B. (2006). Web 2.0: A new wave of innovation for teaching and learning? EDUCAUSE Review, 41(2), 32-44.

Augar, N., Raitman, R., & Zhou, W. (2004). Teaching and learning online with wikis. Paper presented at the ASCILITE Australasian Society for Computers in Learning in Tertiary Education 2004 Conference, Perth, WA.

Cheng, Y. C. (2002). Linkage between innovative management and student-centred approach: Platform theory for effective learning. Paper presented at the Second International Forum on Education Reform: Key Factors in Effective Implementation, Bangkok, Thailand.

Downes, S. (2005a). What e-learning 2.0 means to you. Paper presented at the meeting of the Transitions in Advanced Learning Conference, Ottawa.

Downes, S. (2005b). E-learning 2.0. eLearn Magazine. Retrieved October 25, 2007, from http://elearnmag.org/subpage.cfm?section=articles&article=29-1

Fraser, J. (2005, April 21). It's a whole new Internet. Adaptive Path. Retrieved October 25, 2007, from http://www.adaptivepath.com/publications/essays/archives/000430.php

Freund, R. J., & Piotrowski, M. (2003). Mass customization and personalization in adult education and training. Paper presented at the 2nd Interdisciplinary World Congress on Mass Customization and Personalization MCPC2003, Munich, Germany.

Grant, L. (2006). Using wikis in schools: A case study. Retrieved October 25, 2007, from http://www.futurelab.org.uk/research/discuss/05discuss01.htm

Jennings, D. (2005). E-learning 2.0, whatever that is. DJ Alchemi Web, an individual brew of learning, culture and technology. Retrieved October 25, 2007, from http://alchemi.co.uk/archives/ele/elearning_20_wh.html

Johnson, D., & Johnson, R. (1999). Learning together and alone: Cooperative, competitive, and individualistic learning. Boston: Allyn and Bacon.

Johnson, D., Johnson, R., & Holubec, E. (1998). Cooperation in the classroom. Boston: Allyn and Bacon.

Masie, E. (2006). Nano-learning: Miniaturization of design. Chief Learning Officer (CLO) Magazine. Retrieved October 25, 2007, from http://www.clomedia.com/content/templates/clo_article.asp?articleid=1221&zoneid=173

McPherson, K. (2006). Wikis and student writing. Teacher Librarian. Retrieved October 25, 2007, from http://redorbit.com/news/education/761377/wikis_and_student_writing/index.html

O'Hear, S. (2005). Seconds out, round two. The Guardian. Retrieved October 25, 2007, from http://education.guardian.co.uk/elearning/story/0,10577,1642281,00.html

O'Hear, S. (2006). E-learning 2.0: How Web technologies are shaping education. In R. MacManus (Ed.). Retrieved October 25, 2007, from http://www.readwriteweb.com/archives/e-learning_20.php

O'Reilly, T. (2005). What is Web 2.0: Design patterns and business models for the next generation of software. O'Reilly Web. Retrieved October 25, 2007, from http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html

Pearce, J. (2006). Using wiki in education. The Science of Spectroscopy. Retrieved October 25, 2007, from http://www.scienceofspectroscopy.info/edit/index.php?title=Using_wiki_in_education

Polo, F. (2006). Marketing 2.0: New way to old things. Jornada Internet de Nueva Generación: Web 2.0, Internet 2.0, Spain. Retrieved October 25, 2007, from http://internetng.dit.upm.es/ponencias-jing/2006/polo.pdf

Prinz, W. (2006). Social Web applications. Jornada Internet de Nueva Generación: Web 2.0, Internet 2.0, Spain. Retrieved October 25, 2007, from http://internetng.dit.upm.es/ponencias-jing/2006/prinz.pdf

Richardson, W. (2006). Blogs, wikis, podcasts, and other powerful Web tools for classrooms. Thousand Oaks, CA: Corwin Press.

Rosen, A. (2006). Technology trends: E-learning 2.0. Learning Solutions e-Magazine. Retrieved October 25, 2007, from http://www.readygo.com/e-learning-2.0.pdf

Schank, R. C. (2002). Designing world-class e-learning. New York: McGraw-Hill.

Siemens, G. (2005). Connectivism: A learning theory for the digital age. Retrieved October 25, 2007, from http://www.elearnspace.org/Articles/connectivism.htm

Verdú, E., Regueras, L. M., Verdú, M. J., Pérez, M. A., & de Castro, J. P. (2006). Improving the higher education through technology-based active methodologies: A case study. WSEAS Transactions on Advances in Engineering Education, 3(7), 649-656.

Woolfolk, A. (2001). Educational psychology. Boston: Allyn and Bacon.
Chapter XIII
Telematic Environments and Competition-Based Methodologies: An Approach to Active Learning Elena Verdú University of Valladolid, Spain Luisa M. Regueras University of Valladolid, Spain María J. Verdú University of Valladolid, Spain Juan Pablo de Castro University of Valladolid, Spain María A. Pérez University of Valladolid, Spain
ABSTRACT

This chapter provides an overview of technology-based competitive active learning. It discusses competitive and collaborative learning and analyzes how adequate the different strategies are for different individual learning styles. First, some classifications of learning styles are reviewed. Then, the chapter discusses competitive and collaborative strategies as active learning methodologies and analyzes their effects on students' outcomes and feelings, according to their learning styles. Next, it shows how networking technology can mitigate the possible negative aspects. All the discussion is supported by significant case studies from the literature. Finally, an innovative system for active competitive and collaborative learning is presented as an example of a versatile telematic learning system.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Introduction

New active learning methodologies are emerging as more effective learning methods than traditional ones. A learning process is effective when its results are lasting and transferable to other situations. Various studies have shown an important improvement in long-term retention of what is learnt when active learning techniques are introduced into the learning process (Canós & Mauri, 2005; Timmerman & Lingard, 2003). For example, Hyland (2002) shows that people are able to remember about 20% of what they listen to (passive), 70% of what they say (active), and 90% of what they say and do (active). The use of active methodologies helps students develop the capacity to research actively and to undertake prominent roles and leadership in their own learning process, tackling problems with their own resources. Active learning is not a new concept, although interest in applying it has increased considerably in recent years. This increase has become more noticeable with the spread of information and communication technology (ICT). ICT opens up numerous possibilities for implementing active methodologies. It permits the development of remote cooperative activities, where the teacher's role can be easily adapted to a model in which students must actively lead their learning process (Bryndum & Montes, 2005). When applying ICT-based active learning methods, the teacher should understand the educative value of the telematic tools and know how to use the technology to improve the learning process. In this sense, it is commonly believed that technology is effective when it comes accompanied by a constructivist pedagogy, which supports learning based on investigation. Wirsig (2002) agrees and, moreover, states that technology can add considerable cognitive value to the learning process.
It also has to be taken into account that implementing an education model based on active methodologies is not an easy task, as a number of difficulties usually arise. Some of these difficulties are: the rejection of new methods by both students and teachers; the number of students in each class; and even the current classroom set-up, which appears more suited to taking notes than to group work (Verdú, Regueras, Verdú, Pérez, & de Castro, 2006a). On the other hand, the results obtained when applying active learning techniques have not always been positive. These techniques are more effective for achieving some objectives, whereas traditional classes are better for others (McCarthy & Anderson, 2000). Besides, students have different learning styles and, consequently, not every student benefits equally from active learning. For example, Zywno and Waalen (2002) compare the effect of individual learning styles on student outcomes in conventional environments with that in hypermedia-assisted environments. When applying hypermedia systems, they find better results with students who prefer active, sensing, and global learning. In any case, they state that, due to the multimodal attributes involved, hypermedia is more effective in reaching all types of students and in reducing differences in academic performance among the various learning styles. Therefore, we can assert that selecting a learning-teaching strategy entails not only determining in advance the cognitive activity of the learning, that is, the type of skills, competencies, and techniques to be developed (Fandos & González, 2005), but also taking into account the individual learning styles of students. The strategy implemented in a certain learning process may not be the right one for every student. One of the key aspects for the success of a learning methodology is motivation. Recommendations about learning strategies normally include techniques to encourage students.
Nowadays we have more resources than ever to design
strategies with great motivational potential. Technology-based materials, such as hypermedia and interactive contents, promote divergent thinking, stimulate inquiry instead of imposing certain views, and contribute to autonomous learning. Students are encouraged to pose problems, select sources, and analyze them with a critical eye (Bryndum & Montes, 2005). Moreover, the multiple possibilities of ICT allow us to design strategies adaptable to a wider range of learning styles. Technology-enhanced learning has generated many experiences in computer supported collaborative learning (CSCL). Motivation is one of the most positive aspects of collaborative work. However, some students feel more motivated through competition. Team competition has a dual nature: it is both competitive and collaborative and, therefore, offers many possibilities when facing a heterogeneous group of students. Although some authors do not recommend competitive learning (Brightman, 2006; Johnson & Johnson, 1988), we can also find studies in which good results are obtained when applying this type of learning with the support of ICT (Chang, Yang, Yu, & Chan, 2003; Philpot, Hall, Hubing, & Flori, 2005; Titcomb, Foote, & Carpenter, 1994). Some of these case studies will be reviewed throughout this chapter. In summary, the objectives of this chapter are:

• To review different classifications of learning styles that can be found in the literature.
• To study how ICT can promote active learning and the improvement of the learning process, through the review of some case studies.
• To discuss competitive and collaborative strategies as active learning methodologies and analyze their effects on students' outcomes and feelings, according to their learning styles, with the support of some examples.
• To present, as an example of a versatile telematic learning system, the Quest Environment for Self-managed Training (QUEST), an innovative tool for active, competitive, and collaborative learning (Verdú, Regueras, Verdú, Pérez, & de Castro, 2006b; Verdú, de Castro, Pérez, Verdú, & Regueras, 2006).
Learning Styles: Classifications

Numerous categorizations of learning styles are to be found in the literature. One of the most frequent classifications (Burd & Buchanan, 2004; Canós & Mauri, 2005; Marqués, 2001) is that established by Kolb, who identifies four learning styles according to the way information is received (from concrete experiences to abstract concepts) and how it is processed (from active experimentation to reflective observation): diverger, converger, assimilator, and accommodator. Divergers rely on a combination of concrete experience and reflective observation. They like working in groups in order to compile information and often prefer to observe rather than to participate. Convergers prefer high levels of abstract conceptualization together with active experimentation. They seek to learn via the direct application of ideas, problem solving, or deductive decision-making, without taking social and personal relationships into account. Assimilators, as the name suggests, prefer to combine abstract conceptualization with reflective observation and are good at assimilating a lot of information and arranging it in a more logical form. Their greatest strength lies in their ability to develop theoretical models. Finally, accommodators combine concrete experience with active experimentation and like working in groups. Their greatest strength lies in doing things, carrying out plans and experiments, and involving themselves in new experiences. Other categorizations take into account the channels through which information arrives. For
example, Mehlenbacher, Miller, Covington, and Larsen (2000) classify students according to the categorization of learning styles developed by Felder. They distinguish active from reflective students, visual from verbal students, sensing from intuitive students, and sequential from global students. Active students prefer to process information through engagement in physical activity, discussion, and group work, whereas reflective students tend to work alone. Visual students base their work on pictures and graphics, whereas verbal students prefer written and spoken words. Sensing students tend to work with visual and sound aids, whereas intuitive students prefer memories and ideas. Finally, sequential students like working in logical incremental steps, whereas global students prefer total-picture reasoning. Along another line, Kim and Sonnenwald (2002) use the scale of learning preferences of Owens and Barnes to identify three learning styles: cooperative, competitive, and individualized. The cooperative learning style indicates a preference for achieving individual goals while working in a group. The competitive learning style indicates a preference for learning in competition with others, often achieving individual goals. Lastly, the individualized learning style indicates a preference for achieving individual goals with no involvement with other students. Finally, some authors slightly modify traditional learning-style models in order to adapt them to new technology-based systems. For example, Brown, Cristea, Stewart, and Brailsford (2005) extend Curry's "onion" model for adaptive hypermedia systems. They integrate a prior-knowledge layer as an additional layer to those included in the "onion" model, giving: instructional preference, social interaction, information processing style, prior knowledge, and cognitive personality style.
The innermost layer, cognitive personality style, seeks to measure an individual's personality, specifically how they prefer to acquire and integrate information. Moving outwards, the next layer measures information processing style and examines a learner's intellectual approach to assimilating new information. The layer beyond examines social interaction and how students prefer to interact with each other. The outermost layer, instructional preference, tends to relate to external factors such as the physiological and environmental stimuli associated with learning activities. The outermost layers are more influenced by external factors (and are more observable), whereas the innermost layers are considered more stable psychological constructs and less susceptible to change; however, they are much less easily measured. Traditional learning methodologies have been oriented toward reflective students and individualized learning. Nowadays, there is a greater interest in applying learning methods suitable for different learning styles. In this sense, the multiple possibilities that ICT offers allow us to adapt the methodologies used to a wider range of learning styles.
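To make the layering concrete, the extended "onion" model can be written down as a small ordered structure. This is purely an illustrative rendering on our part: the ordering follows the enumeration given above (outermost to innermost), and the function name is ours, not code from Brown et al. (2005).

```python
# Layers of the extended "onion" model, ordered from outermost (most
# observable, most influenced by external factors) to innermost (most
# stable, least easily measured). Illustrative only.

ONION_LAYERS = [
    "instructional preference",      # outermost
    "social interaction",
    "information processing style",
    "prior knowledge",               # layer added by Brown et al. (2005)
    "cognitive personality style",   # innermost
]

def depth(layer: str) -> int:
    """Higher depth = deeper layer = more stable, harder to measure."""
    return ONION_LAYERS.index(layer)

depth("cognitive personality style") > depth("instructional preference")  # True
```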
ICT-Based Active Learning

There are different types of telematic tools and educational materials. Most of them are currently based on Web and database technologies:

• Tools for the management of courses and students: enrolment, student data records, student achievement records, and so forth.
• Tools for online lectures: slides, videos, videoconferences, electronic blackboards, and so forth.
• Materials supporting classes: exercises, self-assessments, interactive tutorials, virtual encyclopaedias, multimedia books, hypertext references, and so forth.
• Virtual laboratories: animations, simulations, case studies, and so forth.
• Communication tools: electronic mail, chat, discussion forums, instant messaging services, query boxes, distribution lists, news boards, news groups, multiconferences, and so forth.
• Tools for collaborative learning: coordination of group work, virtual spaces for sharing information and resources, management of document versions, and so forth.
Nowadays, these tools are not used alone but integrated into learning management systems (LMS), which permit the scheduling, implementation, and management of the whole learning process. Most of the currently available LMS platforms (WebCT, Blackboard, Angel, Centra, Moodle, Claroline, and so forth) include many of those common tools. They are used in different active learning contexts, since they facilitate the interactions between the teacher and the student, among students, and between the student and the course material. These interactions are very important components of active learning (Mehlenbacher et al., 2000). As well as management, communication, and interaction tools, interactive contents, such as online interactive exercises, can be very efficient when used as instruments for active learning. In fact, whereas most of the available e-learning material consists of static hypertext pages, at best with Flash animations, current learning theory suggests that student achievement improves when the educational resources are more interactive and multimedia-rich (Morozov, Tanakov, Gerasimov, Bystrov, & Cvirco, 2004). However, it has to be taken into account that interactivity is a critical design objective for educational Web sites, since it demands a great deal of work. Whereas developing static or quasi-static hypertext pages is cheap and easy, designing and implementing interactive material takes a lot of time and is a complex and expensive task. Active methodologies based on the use of ICT promote active learning and permit cooperative
activities. This fact has generated many experiences in what has been called computer supported collaborative learning (CSCL), often presented as the opposite of traditional competitive learning. However, in our opinion, there is no opposition between collaboration and competition, since both techniques can be used in a complementary way. Team competition is a good example of this.
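Interactive contents of the kind discussed above, such as online exercises with automatic feedback, can be conceptually very simple. The following sketch is entirely illustrative, with hypothetical names, and is not the implementation of any LMS mentioned in this chapter; it shows a multiple-choice item that returns instant formative feedback.

```python
# Illustrative sketch of an online self-assessment item with automatic
# feedback. Names and structure are hypothetical.

from dataclasses import dataclass

@dataclass
class QuizItem:
    question: str
    options: list            # multiple-choice options
    correct: int              # index of the correct option
    feedback: dict            # per-option explanatory feedback

    def check(self, answer: int):
        """Return (is_correct, feedback) for instant formative feedback."""
        ok = answer == self.correct
        msg = self.feedback.get(answer, "Correct!" if ok else "Try again.")
        return ok, msg

item = QuizItem(
    question="Which Kolb style combines concrete experience with active experimentation?",
    options=["Diverger", "Converger", "Assimilator", "Accommodator"],
    correct=3,
    feedback={0: "Divergers combine concrete experience with reflective observation.",
              3: "Right: accommodators learn by doing and experimenting."},
)
ok, msg = item.check(3)   # -> (True, "Right: accommodators learn by doing and experimenting.")
```

Tools like this are cheap to author once the item format is fixed; the expensive part, as noted above, is richer interactivity such as simulations.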
Collaborative and Competitive Learning

Collaborative work, or group work, is work that a group of people carry out with the aim of achieving a common objective. Research on group techniques suggests that group work improves the way obstacles are perceived and establishes the group as an element of support and motivation for facing up to learning (Fandos & González, 2005). Although it is sometimes used as a synonym of collaborative learning, cooperative learning puts more emphasis on the product obtained during the group learning process. Besides, the scheduling and guidance of the teacher play a more important role. However, in spite of that slight nuance of meaning, both types of learning differ from traditional learning in the same respects (Marqués, 2001):

• They centre on the student.
• There is an intrinsic motivation.
• They are focused on knowledge-building.
• The responsibility for the learning falls especially on the student.
• There is a greater motivation.
• The development of higher-order reasoning is promoted.
• More abilities are developed: research, group work, problem solving, public presentations, social abilities, prevention of and mediation in conflicts, and so forth.
In spite of these positive aspects, student satisfaction in courses requiring a collaborative effort among peers is heavily influenced by the project team experience (Reichlmayr, 2005). The development of ICT has enabled the creation of tools which make collaborative work easier (Fandos & González, 2005):

• Communication tools: electronic mail for the exchange of information among the members of a group, the discussion space or forum for sharing ideas, and chat or videoconference.
• Organizational tools: schedule, notice board, and so forth.
• Tools for the presentation of ideas: the electronic blackboard and remote desktop applications are good examples of this type of resource.
• Tools for sharing and managing documents: these tools allow the members of the workgroup to access documents located in shared remote workspaces, while the system automatically manages the different versions of the documents.
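The automatic version management just mentioned can be reduced to a very small core: every save of a shared document is retained, so any member of the workgroup can retrieve an earlier version. The following sketch is purely illustrative (the class and method names are ours, not from any of the systems discussed here).

```python
# Minimal sketch of automatic version management for a shared document.
# Hypothetical names; illustrative only.

from datetime import datetime, timezone

class SharedDocument:
    def __init__(self, name):
        self.name = name
        self.versions = []  # list of (timestamp, author, content)

    def save(self, author, content):
        """Store a new version and return its version number (1-based)."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.versions.append((stamp, author, content))
        return len(self.versions)

    def get(self, version=None):
        """Return a given version's content, or the latest by default."""
        idx = len(self.versions) - 1 if version is None else version - 1
        return self.versions[idx][2]

doc = SharedDocument("project-report")
doc.save("ana", "First draft")
doc.save("juan", "First draft, revised")
doc.get(1)    # -> "First draft"
doc.get()     # -> "First draft, revised"
```

Real systems such as BSCW add access control, locking, and notification on top of this basic keep-every-version idea.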
Some examples of the use of these tools in CSCL are described below. Van der Linde (2005) uses the collaborative work platform Basic Support for Cooperative Work (BSCW) as a complement and support to traditional class-based sessions. The use of BSCW was intended to increase the level of discussion and knowledge by sharing information outside the classroom. Van der Linde (2005) observed that, although students uploaded online contents and participated in debates created by the teacher, there was only limited interaction within and between groups. The author thinks that this limited use of the BSCW system may be explained by the lack of experience in the use of virtual tools. Students cited time pressures as the reason for their low participation rates in online debates. They also mentioned traditional learning and the
strong contact culture among students as reasons why online debates were not used for interactive communication among collaborative groups, since they preferred to debate, if possible, in a face-to-face manner. However, only 11% of the class felt that the BSCW platform did not help them in their learning process, whereas only 39% perceived that the use of the platform created cohesion among the members of the class, owing to the reasons described above. These results make sense in a hybrid learning context like the one described. Although the BSCW system was not a key element in group discussions, which mainly took place face-to-face, other of its features certainly facilitated the learning process. The Intelligent and Cooperative Systems Research Group of the University of Valladolid also uses BSCW in experiences of project-based learning. In some of their studies they state that the computational support for sharing documents through BSCW has proved important, even in a face-to-face classroom setting (Martínez, Gómez, Dimitriadis, Jorrín, Rubia, & Vega, 2005). Frees and Kessler (2004) use a group of collaborative learning tools called the CIMEL collaborative tools. The system consists of a contact list, a chat client, a searchable database, and a set of screen-sharing applications collectively named ShowMe. ShowMe applications promote a "learn by doing" approach that enables an instructor to view the student's desktop and use annotations and text messages in order to guide students in their learning process. Students have complete control over which parts of the screen they want to share with the teacher, and the system captures the local user's mouse cursor position and streams it to the peer, so the teacher can see where the student is pointing.
Thus, the authors try to promote active learning, forcing students to perform the appropriate actions in order to solve the problem, while allowing the instructor to oversee what is going on, make suggestions, and guide students to the solution. The creators of this tool observed that
student feedback was positive, with most students indicating that it would be very useful to them if made available in all their classes. Mickle, Shuman, and Spring (2004) present another system, used for active and collaborative learning, called Computer Augmented Support for Collaborative Authoring and Document Editing (CASCADE). This system permits inserting comments, diagrams, and graphical representations into shared documents and automatically records when and by whom each comment was made. In that way, the CASCADE system makes it possible for the teacher to review the comments made by students over multiple large documents in a matter of seconds, via automatically provided dynamic hypertext representations. Although it is easier to find successful experiences in CSCL or pure collaborative learning, in part because they are more numerous, below we briefly describe some studies in which ICT-based competitive learning has been successfully applied. Chu, Chang, and Hsia (2004) study the competitive behaviour of students competing with each other in designing a Web site. They find that, during the competition, when two teams are level on points and have a chance to win the contest, both of them obtain higher scores than in any other situation. On the other hand, when one team pulls far ahead of another, the lower-scoring team ends up giving up the struggle to win. Chang, Wang, Liang, Liu, and Chan (2004) describe, on the one hand, the use of a Web site for online contests and, on the other hand, the wireless system EduClick for contests within the classroom or between different classrooms located in the same or different schools. In both cases, contests consist of answering questions automatically processed by the systems. Revilla, a lecturer in the department of Applied Mathematics at the University of Valladolid, has developed a successful project named "OnlineJudge" (http://acm.uva.es).
The developed system corrects the solutions to a number of online
computing challenges and allows participation in online contests. Since its launch in 1997, it has received over 5 million submissions. This fact reflects the success of this type of competitive learning project. The idea of competition is usually linked with gaming, because of the motivational nature of both methods. There are some systems that implement games for learning, which is an effective way to increase motivation, fun, and learning (Philpot et al., 2005). Yu, Chang, Liu, and Chan (2002) present a study conducted to examine students' preferences towards different kinds of competition using the online system JOYCE. The JOYCE system is a competitive board game that allows students to compete against each other or against a simple computer-simulated agent. Players have to answer multiple-choice questions correctly in order to win the game (Chang et al., 2003). The authors justify choosing "gaming" as the instructional method because instructional games have been suggested as a powerful technique to capture and hold student interest. Besides, they include an element of competition to further promote motivation. In their study, students were exposed to three competition modes (Yu et al., 2002):

• Anonymity mode: the identities of the participants are concealed.
• Face-to-face mode: participants sit next to each other during competition (therefore, it is not anonymous).
• Distance mode: the identities of the participants are revealed, but they are geographically distant from each other while competing in the system.
Their results show that students prefer anonymous competition because it is more stimulating and more likely to reduce stress and other negative emotions, whereas face-to-face competition is what they dislike most.
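The anonymity that networked systems provide is straightforward to implement: the server can expose only pseudonyms on the public scoreboard while keeping the real identities private until the teacher closes the contest. The sketch below is hypothetical (it is not the actual JOYCE implementation; all names are ours).

```python
# Illustrative sketch of an anonymous networked contest: the public
# scoreboard shows pseudonyms only; the identity mapping stays private.

import itertools

class AnonymousContest:
    def __init__(self):
        self._ids = itertools.count(1)
        self._real_name = {}   # pseudonym -> real identity (private)
        self.scores = {}       # public scoreboard, keyed by pseudonym

    def join(self, real_name):
        alias = f"player-{next(self._ids)}"
        self._real_name[alias] = real_name
        self.scores[alias] = 0
        return alias

    def add_points(self, alias, points):
        self.scores[alias] += points

    def final_report(self):
        """Only the teacher de-anonymizes results, after the contest."""
        return {self._real_name[a]: s for a, s in self.scores.items()}

contest = AnonymousContest()
a = contest.join("Elena")      # -> "player-1"
b = contest.join("Juan")       # -> "player-2"
contest.add_points(a, 10)
contest.scores                 # shows only pseudonyms to competitors
contest.final_report()         # -> {"Elena": 10, "Juan": 0}
```

This is exactly the kind of mitigation the next paragraph discusses: anonymity is cheap in a networked setting, whereas it is impossible in a face-to-face classroom.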
As Yu et al. (2002) state, some previous research showed that competition has a negative effect on interpersonal relationships, emotional states, and group processes. Those studies were conducted in face-to-face situations, in which the identities of the participants cannot be hidden without networking technological support. However, the authors state that it remains to be seen whether the negative effects of face-to-face competition can be mitigated by the anonymity inherent in network competition and synchronous e-learning environments. Chang et al. (2003) evaluate the effectiveness of the JOYCE system and observe that most students think that playing a game is a good way to learn, since it helps them retain much knowledge. Besides, they state that the system increases motivation, as their studies revealed that students want to read more articles and books to find answers and to win the game. Philpot et al. (2005) develop some computer-based interactive games for learning specific engineering subjects. Those games make use of repetition and carefully constructed levels of difficulty in order to help students perform better in their learning. Students can participate in the games at their own pace but in a competitive way. As in the examples described above, students' quantitative ratings of and comments on these games were very positive. Moreover, students who used the games scored significantly higher on quizzes than those who learned via traditional lectures. All these examples show the success of different learning strategies. But how do we choose one of them?
Choosing a Learning Strategy

As has been said before, the choice of a learning-teaching strategy implies several considerations. The first thing to take into account is the desired
cognitive activity, that is, the goals of learning, the type of skills or abilities to be developed, and the desired level of cognition: from the recall of information to more abstract levels such as synthesis and evaluation, following Bloom's taxonomy of learning. Once the goals and skills to be developed have been identified, it is necessary to design strategies and activities that lead students to the desired level of cognition. Moreover, when designing the global strategy or concrete classroom activities, not only the desired cognitive activity but also the students' motivation should be taken into account. Therefore, instructors should try to adapt their learning strategies to the most common individual learning styles of their students, and then choose the ICT tools that can best support the selected strategies.
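The first design step described above, matching activities to a target level of cognition, can be sketched as a simple lookup. The activity examples below are our own illustrations, drawn loosely from the tools and experiences discussed in this chapter, not a prescription from Bloom's taxonomy itself.

```python
# Illustrative mapping from Bloom's taxonomy levels to candidate
# ICT-supported activities. Activity lists are hypothetical examples.

BLOOM_ACTIVITIES = {
    "knowledge":     ["self-assessment quizzes", "flashcard drills"],
    "comprehension": ["summarizing readings", "explaining concepts in a forum"],
    "application":   ["virtual laboratory exercises", "online judge problems"],
    "analysis":      ["case-study discussions", "comparing design alternatives"],
    "synthesis":     ["team projects", "designing a Web site in competition"],
    "evaluation":    ["peer review of essays", "debating solutions online"],
}

def suggest_activities(target_level):
    """Return candidate activities for the desired level of cognition."""
    return BLOOM_ACTIVITIES[target_level.lower()]

suggest_activities("synthesis")   # -> ["team projects", "designing a Web site in competition"]
```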
Motivation

In general, it can be asserted that motivation has a great influence on the learning process. It stands to reason that if students want to learn they will get more involved in the learning process and, consequently, perform better. Several factors influence motivation, such as the connection of educational activities to the real world or the use of activities that facilitate constructive learning. The key is that a well designed and well executed learning strategy involves motivation; this is why the work of the teacher during the educational design is so important. Motivation is even more critical in a distance learning context, in which the teacher cannot interact with students face-to-face. In these cases, interactivity can be used as an element of motivation in order to capture and hold the students' interest. The teacher should provide an interactive and dynamic environment to compensate for the physical distance. Moreover, if it is well designed, results can even improve.
Hislop (1999) presents some interesting data obtained when evaluating a learning experience developed completely online: 95% of students felt that they had better access to the instructor, and 43% felt that they actually communicated with the instructor more than they would in a traditional class. These data confirm the hypothesis that Web tools facilitate interaction between the teacher and the student. However, 51% of students missed face-to-face lectures, 40% felt that they had to work harder in the online course, and 15% felt that the online class was more boring than a traditional class. These data reflect, on the one hand, the benefit of considering telematic systems as additional resources to be used taking into account the students' profile and the educational, social, and professional context. On the other hand, they reflect the idea that online education does not work well for everyone, since its success also depends on the different learning styles.
Learning Styles

The interactivity provided by the Web seems to be positive, since it is a good motivational element. However, as has been mentioned throughout this chapter, the different learning styles must always be taken into account. Results from Mehlenbacher, Miller, Covington, and Larsen (2000) show that, in a Web environment, reflective and global learners perform better than active and sequential learners. They conclude that reflective learners, who prefer solitary, quiet problem-solving as opposed to group discussion of problems, may have been more comfortable in the online courses. This result surprised them somewhat, since they had assumed that their "interactive" Web site would favour active learners. However, attempting to emulate the interactivity of a face-to-face class on the Web is highly difficult. It is also important to consider, as some authors highlight (Burd & Buchanan, 2004), that, even if individuals are usually strong in one learning style, in general they will exhibit multiple
learning styles depending on factors such as age, personality, culture, and environment. In agreement with this principle, we believe that not every student must be treated in the same way, and that the team of teachers must propose a set of activities that facilitate the learning process of the different students. The idea is to respect diverse talents and ways of learning (Chickering & Ehrmann, 1996): students need opportunities to show their talents and to learn in ways that fit their learning styles better. Therefore, teachers must apply different learning techniques (collaborative learning, practical sessions, self-assessments, and so forth) when designing their classes. However, we also believe that students must be adequately prepared to successfully undertake their professional careers. Teachers should therefore focus on the use of active strategies in which cooperative and competitive learning activities, and not only individual learning activities, take place. As explained before, although motivation is one of the most positive aspects of collaborative work, some students feel more motivated through competition. Team competition has a dual nature: it is both competitive and collaborative and, therefore, offers many possibilities when facing a heterogeneous group of students. This could be taken into account when selecting and designing a learning strategy. When designing a competitive learning strategy, besides the choice between individual and team competition, other factors should be analyzed, since the competitive methodology can be anonymous or with known authorship, face-to-face or at a distance, and so forth. At this point, the possibilities of networking should be considered.
Use of ICT

ICT is a useful tool to support pedagogical principles. Yet, for each learning strategy, we must ensure that the right technology is applied.
Pedagogical principles remain and, at each time and for each situation, we must analyse the existing technology and choose the one appropriate to support the learning strategy selected for the existing scenario, as suggested by Chickering and Ehrmann (1996). First of all, communication technologies can facilitate access to teachers by means of electronic mail or videoconference systems. Besides, teachers can provide students with documents and any interesting information about the course through a Web page, for example. A clear advantage of electronic mail, forum, or instant messaging applications is that students can communicate with their classmates at a distance, making cooperation among them easier. Moreover, collaborative work tools allow a group of students to work on the same document, to share documents, and so forth. Besides, virtual laboratories, simulation software, interactive Web pages, and so forth, are efficient tools for active learning, as previously said. Finally, other tools enable students to know their progress in learning. For example, there are software tools to monitor knowledge development, interactive activities that provide automatic feedback, and monitoring tools that register the activities done by students in databases. In the same way, these tools can be used by teachers to monitor student progress, so that they can take corrective actions accordingly. All these technologies also make it possible to save time in different ways. For example, students do not need to go to libraries if they have the documentation available on the Web. Besides, they allow a more efficient management of time, as asynchronous communication tools free students from the limited timetables of face-to-face classes, allowing them to interact with their classmates and teachers at any moment. Moreover, ICT allows distance education, as neither time nor space is a barrier to education (Verdú, Verdú, Regueras, & de Castro, 2005).
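The monitoring tools just mentioned, which register student activities in databases, can also be reduced to a very small core. The sketch below uses Python's standard sqlite3 module; the schema, table name, and event names are hypothetical, chosen only to illustrate the idea of logging activities and aggregating them for the teacher.

```python
# Minimal sketch of a monitoring tool that logs student activities in a
# database so teachers can track progress. Hypothetical schema.

import sqlite3

conn = sqlite3.connect(":memory:")   # a file path in a real deployment
conn.execute("""CREATE TABLE activity (
                   student TEXT, event TEXT,
                   ts DATETIME DEFAULT CURRENT_TIMESTAMP)""")

def log_activity(student, event):
    conn.execute("INSERT INTO activity (student, event) VALUES (?, ?)",
                 (student, event))
    conn.commit()

def events_per_student():
    """Aggregate view a teacher could use to spot inactive students."""
    rows = conn.execute(
        "SELECT student, COUNT(*) FROM activity GROUP BY student")
    return dict(rows.fetchall())

log_activity("ana", "forum_post")
log_activity("ana", "quiz_completed")
log_activity("juan", "login")
events_per_student()   # -> {"ana": 2, "juan": 1}
```

An LMS builds dashboards and alerts on top of exactly this kind of event log.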
New technologies can also help communicate high expectations to students. For example, students may hand in better essays when they know that these essays are going to be available on a Web page. Finally, the great variety of tools based on new technologies supports a greater range of learning methods. Technologies allow students to learn in the most appropriate and effective way, according to their learning styles. For example, there are tools for collaborative learning, and there are also tools for solitary, reflective, and self-assessed learning.

In summary, new technologies seem to be efficient learning support tools. However, the need to become familiar with new learning tools, as well as possible technical problems, arise as threats to the normal learning process. Student motivation may drop sharply if the use of a new tool is complex or if the system fails frequently. Although education technologists promote the use of interactive systems, we must be careful when designing these systems. More reflective students may not benefit from environments that provide instantaneous feedback and response; they may prefer first reading and following links until they develop a fuller representation of the entire learning space, and only then acting, inputting, and writing (Mehlenbacher et al., 2000).

The telematic systems described above can be used outside or inside the classroom, in isolation or together with traditional resources such as blackboards and books. Thus, as an alternative to learning carried out completely at a distance through the Internet, an intermediate solution known as blended learning arises. Blended learning is more appropriate in certain contexts, which are mentioned below in this section. It aims to join the best of the face-to-face classroom with the best of online learning, to promote active independent learning and reduce class seat time (Reichlmayr, 2005).
For example, Reichlmayr (2005) uses blended learning to take better advantage of class face time; students are expected to prepare for class by reading relevant sections from the textbook and online resources. Lectures and in-class activities complete and consolidate this learning process, rather than being the primary source of knowledge. In addition, a chat application is used for communication among students and with the teacher outside the classroom. In this case, most students (72%) liked this mixed online/face-to-face environment. Sonya Symons and Doug Symons (2002) indicate that technology helped them to incorporate valuable aspects of education in more ways than is usually possible in large classes. The telematic part of the learning process consists of weekly discussion groups to discuss the week's material, Web-based assignments, and activities using existing Internet resources to encourage interactive learning.

Godoy (2005) uses a Web-based simulated environment as a support tool for a learning-by-doing methodology. At the beginning of the simulation, the learner is placed in a role and learns about a problematic situation to be solved. Then, the student interacts with the case, obtaining more information about it through navigation. The options have to be chosen from a menu, and the learner decides on the path to follow. The student is given resources about previous cases and other material of general relevance to the understanding of the case. Besides, the student can perform computer simulations and interact with other learners and with a tutor through a synchronous forum. Students can send the solution to a problem as a free-format response to a tutor in order to obtain feedback. They can also choose the response from a limited set of possibilities and send it to the system, which will automatically respond with positive or negative considerations.
Finally, ICT and, more specifically, networking support anonymity, which allows teachers to design learning strategies adapted to the different modes mentioned above: anonymous or with known authorship. An example of the success of this possibility is shown in the following section, where we present QUEST (Quest Environment for Self-managed Training), a versatile telematic learning system for active, competitive, and collaborative learning (Verdú et al., 2006; Verdú et al., 2006b).
QUEST: A Mixed Competitive-Collaborative Solution

QUEST has been developed in the context of the interdisciplinary research group Intersemiotics, Translation, and New Technologies (ITNT). It is accessible from any computer with Internet access and, hence, can be used in the classroom, at home, or in a cybercafé. This is possible because QUEST has been implemented as a module that can be integrated into the e-learning platform Moodle.

As a learning tool, QUEST aids the introduction of cooperative and competitive workshops supported by telematics. The system pursues the development of students' inquiry, documentation, and critical analysis skills, while raising the level of involvement and communication between students and teachers. The system presents both individual and group work environments in which students are given a set of intellectual "challenges" that must be solved under time constraints, proposed by other students and/or by teachers and tutors. The answers to these challenges are open, with the possibility of including mathematical equations and any type of attached files. The workshop mainly relies on competitiveness, collaboration, and social acknowledgment as motivation mechanisms and seeks to strengthen these skills through the students' academic work. Hence, workshop sessions are presented as a contest, with a ranking based on the scores obtained by the students for the answers they submit to the proposed challenges.
Telematic Environments and Competition-Based Methodologies
To enrich the learning process by means of collaboration and involvement, the system allows students to submit challenges and to pre-assess the corresponding answers, rewarding them according to the quality of the tasks done. Each new challenge proposed by a student must be validated by the tutor. From the moment a challenge is created until the end of the process, its score varies (as shown in Figure 1):

1. Stationary phase: During this phase, the score remains as proposed by the teacher for a period of time, to allow students to understand and take in the task. This period should be longer when the task is more complex and/or requires substantial prior documentation work.
2. Inflationary phase: During this phase, the score grows to adjust the reward to the difficulty. A lack of correct answers is assumed to mean that the difficulty of the proposed task is higher than the reward offered at that time.
3. Deflationary phase: This phase starts exactly when a challenge is first correctly answered, at which moment the score starts decreasing. During this phase the score continuously decreases following an exponential or a linear pattern, so that the student who is first to answer is awarded the maximum score.

Figure 1. Variable scoring of a challenge during its life-cycle

Some research suggests that it is very useful for students to access the essays written by other students and, moreover, that they write better essays when they know that their classmates will read their work (Hislop, 1999). Thus, when the time to answer is over, the challenge is closed and students can anonymously read the submissions from all participants, which can help them understand and reinforce concepts.

The platform has many interesting features that permit the implementation of a wide variety of learning styles: individual, collaborative, and competitive learning, as well as combinations of them. For example, teachers can propose challenges that must be solved individually, thus leading to individual and competitive learning. They can also propose challenges that must be solved in teams, leading to collaborative (within teams) and competitive (between teams) learning. Finally, they can propose challenges to be solved by teams or individually while eliminating the score and the time constraint, which leads to collaborative or individual, but not competitive, learning. As explained before, teachers must analyse the scenario in which the learning process will take place and the profiles of the participants; taking those elements as a basis, they must adapt the available technological tools to make the best use of them. As described above, students can access their classmates' answers anonymously; that is, the system also permits the identity of participants to be hidden, since we opted to preserve the privacy of the submitting authors. In fact, in the light of the results of our experiences with QUEST (Verdú et al., 2006a), we subscribe to the idea introduced above by Yu et al. (2002): the negative effects of face-to-face competition can be mitigated by the anonymity inherent in ICT.
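The three-phase life-cycle can be modelled as a piecewise score function. The sketch below is illustrative only, since the chapter does not give QUEST's actual formulas: it assumes linear growth in the inflationary phase and exponential decay in the deflationary phase, and all parameter names and rates are hypothetical.

```python
import math

def challenge_score(t, base_score, t_stationary, growth_rate,
                    t_first_answer=None, decay_rate=0.1):
    """Hypothetical score of a challenge at time t (cf. Figure 1).

    Phases: stationary (score fixed), inflationary (score grows while
    no correct answer arrives), deflationary (score decays after the
    first correct answer, so the first answerer gets the peak score).
    """
    if t <= t_stationary:
        # Stationary phase: the score stays as proposed by the teacher.
        return base_score
    if t_first_answer is None or t <= t_first_answer:
        # Inflationary phase: linear growth adjusts the reward upward
        # while the challenge remains unanswered.
        return base_score * (1 + growth_rate * (t - t_stationary))
    # Deflationary phase: exponential decrease from the peak reached
    # at the moment of the first correct answer.
    peak = base_score * (1 + growth_rate * (t_first_answer - t_stationary))
    return peak * math.exp(-decay_rate * (t - t_first_answer))
```

For instance, with a base score of 100, a stationary period of 5 time units, and a growth rate of 0.1 per unit, the score holds at 100, climbs while the challenge remains unanswered, and decays once the first correct answer arrives.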
Conclusion

This chapter gives an overall view of the state of the art of active learning techniques based on ICT, analysing some pedagogical concerns, such as student motivation. An important conclusion we can draw from this study is that, when designing and applying different learning strategies, we should take into account several important elements: the cognitive activity of learning, the individual learning styles of students, the available resources, and the educational, social, or professional context.

In general, the results obtained when applying active learning based on ICT are satisfactory. ICT improves the learning process, supports motivation, and allows students to develop their inquiry, creativity, and critical analysis skills. However, the initial phase of introducing these methods is complicated, owing to the rejection of new methods by both students and teachers, and to the difficulties in designing interactive contents and in defining the learning strategy.

In the last section of this chapter, the telematic system QUEST, designed to be used as a tool for innovative active learning strategies, has been described. The system focuses on active, competitive, and collaborative learning. Many of the existing tools for competitive learning are specific, "tailor-made" for a concrete subject and totally separated from other tools. For this reason, one of the objectives established when QUEST was designed was that it should be valid for multiple disciplines and integrable into Moodle, a widely used open-source e-learning platform. This has the advantage of incorporating all the stages of the learning process (contents, evaluation, tutorials, and so forth) into the same platform, without having to access a different environment for each stage.

Unlike in other tools, in QUEST students are content generators, participating in the learning process in an active way. They can propose challenges and, moreover, they can assess the answers sent by their classmates, thus being involved in the assessment process. Besides, students' answers are made public when challenges finish, while preserving the anonymity of their authors. This undoubtedly enriches the learning of students, who can learn from the answers submitted by their classmates, and contributes to a better quality of those answers. The variable scoring system, as well as the multiple options provided when generating a contest (individual and group contests, different marking strategies, etc.), makes QUEST a highly flexible system. The results obtained when using QUEST in different courses at the University of Valladolid have confirmed that QUEST is a tool that promotes students' participation in class. Besides, in spite of possible initial negative reactions to new methods, QUEST has been widely welcomed by the students, who would like this tool to be applied in other courses.
The focus of the strategies followed when using QUEST brings students and teachers closer to new education methods, in which students have a more active role, as they are not only content receivers but also content generators. This is something that should be promoted more and more.
Future Research Directions

Nowadays, it is not easy to find in the literature cases in which a competitive active learning strategy based on ICT is applied. Actually, we cannot say that the application of active learning strategies is a widespread practice, as the old model in which teachers give lectures and students have passive roles still predominates. In Europe, universities are immersed in the process of convergence towards the European Higher Education Area (EHEA), and the use of active methodologies is a short-term objective. Therefore, it is urgent to analyse the strategies that are adequate for different educational situations, according to the cognitive objectives, the individual learning styles, and the context, thus providing teachers with guidelines that allow them to apply these strategies in their classrooms. Although, as shown throughout this chapter, there are real cases of application of these methodologies with good results, teachers do not have clear guidelines to decide which methodology or technology is adequate in each case. Frequently, teachers would like to use a methodology or a tool, but they do not feel able to do so efficiently. Much work remains in order to establish a set of clear guidelines to be applied in each academic situation.

As discussed in this chapter, some authors think that competition is not a good learning strategy, because it causes stress in students and can even create a bad atmosphere in the classroom. Nevertheless, although these negative effects exist, the educational level must be taken into account. In a university education context, the student must learn not only to work in groups (collaborative learning) but also to be competitive, offering better results than others (competitive learning), as this is also required in the labour market. ICT may be used to reduce negative effects such as those related to a possible bad atmosphere. There are not many studies on this question, so it remains to be seen whether the negative effects of competition can be mitigated by the anonymity that networking technologies provide.

ICT provide the possibility of simultaneously applying different learning strategies through the use of different telematic tools within the classroom. As students with different learning styles coexist in a classroom, and the learning strategies must be in accordance with those styles, the results of applying different learning strategies within a classroom should be analysed. In this sense, several questions arise, such as whether we should use a single strategy for all the students of a classroom, adapted to their most common learning style, or use different strategies for different groups of students. How this would affect the results of students, as well as the development of course programmes, should also be analysed. Besides, there are other forms of active student participation, such as peer review, which has advantages in terms of reducing the teacher's workload. However, at the same time, teachers partially lose control over the process, which is the reason why it cannot be applied in every educational context. In this sense, it would also be important to analyse these techniques in comparison with competitive strategies.
References

Brightman, H. J. (2006). GSU master teacher program: On critical thinking. Retrieved October 26, 2007, from http://www2.gsu.edu/~dschjb/wwwcrit.html

Brown, E., Cristea, A., Stewart, C., & Brailsford, T. (2005). Patterns in authoring of adaptive educational hypermedia: A taxonomy of learning styles. Educational Technology & Society, 8(3), 77-90. Retrieved October 26, 2007, from http://www.ifets.info/journals/8_3/8.pdf

Bryndum, S., & Montes, J. A. (2005). La motivación en los entornos telemáticos [Motivation in telematic environments]. RED Revista de Educación a Distancia, V(13). Retrieved October 26, 2007, from http://www.um.es/ead/red/13/

Burd, B. A., & Buchanan, L. E. (2004). Teaching the teachers: Teaching and learning online. Reference Services Review, 32(4), 404-412.

Canós, L., & Mauri, J. J. (2005). Metodologías activas para la docencia y aplicación de las nuevas tecnologías: Una experiencia [Active methodologies for teaching and the application of new technologies: An experience]. In URSI 2005. Retrieved October 26, 2007, from http://w3.iec.csic.es/ursi/articulos_gandia_2005/articulos/otros_articulos/462.pdf

Chang, L. J., Yang, J. C., Yu, F. Y., & Chan, T. W. (2003). Development and evaluation of multiple competitive activities in a synchronous quiz game system. Journal of Innovations in Education and Training International, 40(1), 16-26.

Chang, S.-B., Wang, H.-Y., Liang, J.-K., Liu, T.-C., & Chan, T. W. (2004). A contest event in the connected classroom using wireless handheld devices. In J. Roschelle, T.-W. Chan, Kinshuk, & S. J. H. Yang (Eds.), Proceedings of the 2nd IEEE International Workshop on Wireless and Mobile Technologies in Education (WMTE 2004) (pp. 207-208). Los Alamitos, CA: IEEE Computer Society.

Chickering, A. W., & Ehrmann, S. C. (1996, October). Implementing the seven principles: Technology as lever. American Association for Higher Education Bulletin, 49(2), 3-6.

Chu, K., Chang, M., & Hsia, Y. (2004). Stimulating students to learn with accuracy counter based on competitive learning. In Proceedings of the IEEE International Conference on Advanced Learning Technologies (ICALT'04) (pp. 786-788). IEEE Computer Society.
Fandos, M., & González, A. P. (2005). Estrategias de aprendizaje ante las nuevas posibilidades educativas de las TIC [Learning strategies in the face of the new educational possibilities of ICT]. In A. Méndez-Vilas, B. Gonzalez, J. Mesa, & J. A. Mesa (Eds.), Proceedings of the Third International Conference on Multimedia and Information & Communication Technologies in Education (pp. 7-10). Cáceres, Spain: Formatex.

Frees, S., & Kessler, G. D. (2004). Developing collaborative tools to promote communication and active learning in academia. In Proceedings of the 34th Annual Conference Frontiers in Education (FIE'04) (Vol. 3, pp. S3B/20-S3B/25). Piscataway, NJ: IEEE.

Godoy, L. A. (2005). Learning-by-doing in a Web-based simulated environment. In Proceedings of the 6th International Conference on Information Technology Based Higher Education and Training (ITHET 2005) (pp. F4C/7-F4C/10). Piscataway, NJ: IEEE.

Hislop, G. W. (1999). Anytime, anyplace learning in an online graduate professional degree program. Group Decision and Negotiation, 8, 385-390.

Hyland, B. (2002). Cone of learning. From the course "Train the trainer." Iowa Center for Public Health Preparedness. Retrieved October 26, 2007, from http://www.public-health.uiowa.edu/icphp/ed_training/ttt/archive/2002/2002_course_materials/Cone_of_Learning.pdf

Johnson, R., & Johnson, D. W. (1998). Cooperative learning. Two heads learn better than one. Transforming Education, 18, 34.

Kim, S., & Sonnenwald, D. H. (2002). Investigating the relationship between learning style preferences and teaching collaboration skills and technology: An exploratory study. In E. Toms (Ed.), Proceedings of the American Society of Information Science & Technology Annual Conference (pp. 64-73). Medford, NJ: Information Today.
Marqués, P. (2001). Didáctica. Los procesos de enseñanza y aprendizaje. La motivación [Didactics. The processes of teaching and learning. Motivation]. Retrieved October 26, 2007, from http://dewey.uab.es/pmarques/actodid.htm

Martínez, A., Gómez, E., Dimitriadis, Y., Jorrín, I. M., Rubia, B., & Vega, G. (2005). Multiple case studies to enhance project-based learning in a computer architecture course. IEEE Transactions on Education, 48(3), 482-489.

McCarthy, J. P., & Anderson, L. (2000). Active learning techniques vs. traditional teaching styles: Two experiments from history and political science. Innovative Higher Education, 24(4), 279-294.

Mehlenbacher, B., Miller, C. R., Covington, D., & Larsen, J. S. (2000). Active and interactive learning online: A comparison of Web-based and conventional writing classes. IEEE Transactions on Professional Communication, 43(2), 166-184.

Mickle, M. H., Shuman, L., & Spring, M. (2004). Active learning courses on the cutting edge of technology. In Proceedings of the 34th Annual Conference Frontiers in Education (FIE'04) (Vol. 1, pp. T2F/19-T2F/23). Piscataway, NJ: IEEE.

Morozov, M., Tanakov, A., Gerasimov, A., Bystrov, D., & Cvirco, E. (2004). Virtual chemistry laboratory for school education. In Proceedings of the IEEE International Conference on Advanced Learning Technologies (ICALT'04) (pp. 605-608). IEEE Computer Society.

Philpot, T. A., Hall, R. H., Hubing, N., & Flori, R. E. (2005). Using games to teach statics calculation procedures: Application and assessment. Computer Applications in Engineering Education, 13(3), 222-232.

Reichlmayr, T. (2005). Enhancing the student project team experience with blended learning techniques. In Proceedings of the 35th Annual Conference Frontiers in Education (FIE'05) (pp. T4F/6-T4F/11). Piscataway, NJ: IEEE.
Symons, S., & Symons, D. (2002). Using the Inter- and Intranet in a university introductory psychology course to promote active learning. In Proceedings of the International Conference on Computers in Education (ICCE'02) (Vol. 2, pp. 844-845). IEEE Computer Society.

Timmerman, B., & Lingard, R. (2003). Assessment of active learning with upper division computer science students. In Proceedings of the 33rd Annual Conference Frontiers in Education (FIE'03) (Vol. 3, pp. S1D/7-S1D/12). Piscataway, NJ: IEEE.

Titcomb, S. L., Foote, R. M., & Carpenter, H. J. (2004). A model for a successful high school engineering design competition. In Proceedings of the 34th Annual Conference Frontiers in Education (FIE'04) (Vol. 1, pp. 138-141). Piscataway, NJ: IEEE.

Van der Linde, G. (2005). The perception of business students at PUCMM of the use of collaborative learning using the BSCW as a tool. In Proceedings of the 6th International Conference on Information Technology Based Higher Education and Training (ITHET 2005) (pp. F2D/10-F2D/15). Piscataway, NJ: IEEE.

Verdú, M. J., de Castro, J. P., Pérez, M. A., Verdú, E., & Regueras, L. M. (2006). Application of TIC-based active methodologies in the framework of the new model of university education: The educational interaction system QUEST. In F. J. García, J. Lozano, & F. Lamamie de Clairac (Eds.), CEUR Workshop Proceedings, Virtual Campus 2006 Postproceedings. Selected and Extended Papers (Vol. 186, pp. 33-40). CEUR-WS.org. Retrieved October 26, 2007, from http://CEUR-WS.org/Vol-186/

Verdú, E., Regueras, L. M., Verdú, M. J., Pérez, M. A., & de Castro, J. P. (2006a). Improving the higher education through technology-based active methodologies: A case study. WSEAS Transactions on Advances in Engineering Education, 3(7), 649-656.
Verdú, E., Regueras, L. M., Verdú, M. J., Pérez, M. A., & de Castro, J. P. (2006b). QUEST: A contest-based approach to technology-enhanced active learning in higher education. In S. Impedovo, D. Kalpic, & Z. Stjepanovic (Eds.), Proceedings of the 6th WSEAS International Conference on Distance Learning and Web Engineering (DIWEB '06) (pp. 10-15). Wisconsin: WSEAS.

Verdú, E., Verdú, M. J., Regueras, L. M., & de Castro, J. P. (2005). Intercultural and multilingual e-learning to bridge the digital divide. Lecture Notes in Computer Science, 3597, 260-269.

Wirsig, S. (2002). ¿Cuál es el lugar de la tecnología en la educación? [What is the place of technology in education?]. Retrieved October 26, 2007, from http://www.educoas.com/Portal/xbak2/temporario1/latitud/Wirsig_Tic_en_Educacion.doc

Yu, F. Y., Chang, L. J., Liu, Y. H., & Chan, T. W. (2002). Learning preferences towards computerised competitive modes. Journal of Computer-Assisted Learning, 18(3), 341-350.

Zywno, M. S., & Waalen, J. K. (2002). The effect of individual learning styles on student outcomes in technology-enabled education. Global Journal of Engineering Education, 6(1), 35-44.
Additional Readings

Brown, E., Cristea, A., Stewart, C., & Brailsford, T. (2005). Patterns in authoring of adaptive educational hypermedia: A taxonomy of learning styles. Educational Technology & Society, 8(3), 77-90. Retrieved October 26, 2007, from http://www.ifets.info/journals/8_3/8.pdf

Chang, S.-B., Wang, H.-Y., Liang, J.-K., Liu, T.-C., & Chan, T. W. (2004). A contest event in the connected classroom using wireless handheld devices. In J. Roschelle, T.-W. Chan, Kinshuk, & S. J. H. Yang (Eds.), Proceedings of the 2nd IEEE International Workshop on Wireless and Mobile Technologies in Education (WMTE 2004) (pp. 207-208). Los Alamitos, CA: IEEE Computer Society.

Chang, L. J., Yang, J. C., Yu, F. Y., & Chan, T. W. (2003). Development and evaluation of multiple competitive activities in a synchronous quiz game system. Journal of Innovations in Education and Training International, 40(1), 16-26.

Chu, K., Chang, M., & Hsia, Y. (2004). Stimulating students to learn with accuracy counter based on competitive learning. In Proceedings of the IEEE International Conference on Advanced Learning Technologies (ICALT'04) (pp. 786-788). IEEE Computer Society.

Covington, M. V. (1998). The will to learn: A guide for motivating young people. New York: Cambridge University Press.

Frees, S., & Kessler, G. D. (2004). Developing collaborative tools to promote communication and active learning in academia. In Proceedings of the 34th Annual Conference Frontiers in Education (FIE'04) (Vol. 3, pp. S3B/20-S3B/25). Piscataway, NJ: IEEE.

Godoy, L. A. (2005). Learning-by-doing in a Web-based simulated environment. In Proceedings of the 6th International Conference on Information Technology Based Higher Education and Training (ITHET 2005) (pp. F4C/7-F4C/10). Piscataway, NJ: IEEE.

Hislop, G. W. (1999). Anytime, anyplace learning in an online graduate professional degree program. Group Decision and Negotiation, 8, 385-390.

Johnson, R., & Johnson, D. W. (1998). Cooperative learning. Two heads learn better than one. Transforming Education, 18, 34.

Kim, S., & Sonnenwald, D. H. (2002). Investigating the relationship between learning style preferences and teaching collaboration skills and technology: An exploratory study. In E. Toms (Ed.), Proceedings of the American Society of Information Science & Technology Annual
Conference (pp. 64-73). Medford, NJ: Information Today.

Levy, Y. (2007). Comparing dropouts and persistence in e-learning courses. Computers & Education, 48, 185-204.

McCarthy, J. P., & Anderson, L. (2000). Active learning techniques vs. traditional teaching styles: Two experiments from history and political science. Innovative Higher Education, 24(4), 279-294.

Mehlenbacher, B., Miller, C. R., Covington, D., & Larsen, J. S. (2000). Active and interactive learning online: A comparison of Web-based and conventional writing classes. IEEE Transactions on Professional Communication, 43(2), 166-184.

Mickle, M. H., Shuman, L., & Spring, M. (2004). Active learning courses on the cutting edge of technology. In Proceedings of the 34th Annual Conference Frontiers in Education (FIE'04) (Vol. 1, pp. T2F/19-T2F/23). Piscataway, NJ: IEEE.

Reichlmayr, T. (2005). Enhancing the student project team experience with blended learning techniques. In Proceedings of the 35th Annual Conference Frontiers in Education (FIE'05) (pp. T4F/6-T4F/11). Piscataway, NJ: IEEE.

Symons, S., & Symons, D. (2002). Using the Inter- and Intranet in a university introductory psychology course to promote active learning. In Proceedings of the International Conference on Computers in Education (ICCE'02) (Vol. 2, pp. 844-845). IEEE Computer Society.
Titcomb, S. L., Foote, R. M., & Carpenter, H. J. (2004). A model for a successful high school engineering design competition. In Proceedings of the 34th Annual Conference Frontiers in Education (FIE'04) (Vol. 1, pp. 138-141). Piscataway, NJ: IEEE.

Van der Linde, G. (2005). The perception of business students at PUCMM of the use of collaborative learning using the BSCW as a tool. In Proceedings of the 6th International Conference on Information Technology Based Higher Education and Training (ITHET 2005) (pp. F2D/10-F2D/15). Piscataway, NJ: IEEE.

Verdú, E., Regueras, L. M., Verdú, M. J., Pérez, M. A., & de Castro, J. P. (2006). Improving the higher education through technology-based active methodologies: A case study. WSEAS Transactions on Advances in Engineering Education, 3(7), 649-656.

Verdú, E., Verdú, M. J., García, J., & López, R. (Eds.). (2006). Best practices in e-learning: Towards a technology-based and quality education. Valladolid, Spain: Boecillo Editora Multimedia.

Yu, F. Y., Chang, L. J., Liu, Y. H., & Chan, T. W. (2002). Learning preferences towards computerised competitive modes. Journal of Computer-Assisted Learning, 18(3), 341-350.

Zywno, M. S., & Waalen, J. K. (2002). The effect of individual learning styles on student outcomes in technology-enabled education. Global Journal of Engineering Education, 6(1), 35-44.
Timmerman, B., & Lingard, R. (2003). Assessment of active learning with upper division computer science students. In Proceedings of the 33rd Annual Conference Frontiers in Education (FIE’03) (Vol. 3, pp. S1D/7 - S1D/12). Piscataway, NJ: IEEE.
Chapter XIV
Open Source LMS Customization: A Moodle Statistical Control Application

Miguel Ángel Conde González, Universidad de Salamanca, Spain
Carlos Muñoz Martín, CLAY Formación Internacional, Spain
Alberto Velasco Florines, CLAY Formación Internacional, Spain
Abstract

This chapter reflects on the possibility of adapting a learning management system (LMS) to the needs of a company or institution. In this case, ACEM allows the definition of course-level and platform-level reports and the automatic generation of certificates and diplomas for the Moodle LMS. These adaptations are intended to complement learning platforms with added-value features, such as the generation of customizable diplomas and certificates and of reports that provide information about both grades and participation in every activity of a course. None of these features is provided by default.
Introduction

Lately, both public and private institutions have been betting on e-learning solutions to satisfy their training needs. Every kind of training that can be offered (in-person, blended, and online) should rely on sufficiently complete and flexible technological support, that is, a learning management system (LMS). An LMS can serve as a support for in-person learning and as the basis of the virtual part of the other two types. These systems are in charge of the different actions involved in the online training process, including managing students and providing them with resources and activities. Because of this, e-learning cannot be implemented without the support of an LMS.

Nowadays, many different LMS can be found on the market, which can be classified into commercial and free-distribution systems. Commercial LMS generally offer a higher level of customization, since the companies that develop them can deliver additional features at an increased final price. On the other hand, LMS based on free software have several advantages derived from their free software nature; however, these platforms are not adequate in some cases. Among free-distribution LMS, we must mention Moodle, a free software project designed to support a framework for social constructivist education (Comezaña & Garcia, 2005). Currently, it is one of the most widespread LMS in the world, in both public and private institutions, and it has a large community of users and developers.

Despite this large community behind it, Moodle does not meet all the needs that can arise. The problem addressed here arises specifically from a particular need of the Centro Internacional de Tecnologías Avanzadas (CITA), belonging to the Fundación Germán Sánchez Ruipérez. Its objective is optimal grade management and a system that allows diplomas and certificates to be generated automatically.
This situation had to be resolved not by providing a new platform but by developing an independent Web application that could run on top of the instance of Moodle currently in use; in addition, the product can be extended to any other instance of this LMS. To meet this need, the creation of ACEM is proposed: an application independent of Moodle that provides the desired functionality without altering the data obtained from any instance of this LMS. Some of the current LMS on the market are described next, detailing their limitations in dealing with grades and their management of certificates and diplomas, with special attention to Moodle as the starting point of the problem. After that, we comment on the development model in Moodle and the difficulties raised by the distribution of the data in the LMS. Finally, the developed product is presented, and the conclusions and future lines of work are listed.
Current Platforms and Their Limitations

Experience with the use of knowledge management platforms gives rise to new needs that current systems cannot satisfy. It is necessary to understand the limitations of these platforms, and the objectives they are intended to achieve, so that those limitations do not become an obstacle at deployment time. Although there is obviously no panacea capable of overcoming every limitation these platforms show, a closer look at their features reveals that some basic functions are not supported. The first limitation of LMS platforms lies in their very definition: they are systems for managing the learning process, but they generally leave out other interesting aspects, such as adequate content management
which enables content to be created within the platform, that is, the typical features of a content management system (CMS), which turn out to be very useful for this kind of platform. The so-called learning content management systems (LCMS) arose to bridge this separation, but their adoption, apart from being scarce, is basically limited to proprietary tools for specific uses (Rengarajan, 2001). Another limitation shown by LMS platforms is the lack of implemented standards that would allow the easy migration of learning objects between platforms (Maurer, 2004). This is where the concept of reuse, fundamental when talking about learning platforms, comes in. One approach that represents a solution is implementing the functionality needed to correctly process standard packages, such as the sharable content object reference model (SCORM) (Jones, 2002). Learning objects can then be transferred between platforms without having to redefine the contents on each one. A further functional gap in LMS platforms is related to the management of the graphical user interface, a concern inherited from the nature of any Web application. Although the focus of this kind of platform should be on the learning process itself and on the quality of the contents, important interface aspects such as Web accessibility should never be neglected. From a statistical point of view, platforms rarely offer a wide range of statistics about their usage or their administration or, more specifically, about the grades of the students who use them. What is more, they do not include graphical representations that help in taking decisions, for example, about the way of structuring or presenting contents based on the information held by the system (above all about users, administrators, teachers, students, and grades).
Taking into account the importance of aspects such as feedback, the features mentioned above become necessary if a high level of satisfaction among all the parties involved is desired.
Such satisfaction is a key element for the success of any kind of platform, especially an LMS platform, which does not represent a total innovation but rather a new way of understanding the learning process. A concept unsupported by LMS platforms arises as a result of these statistical properties: the idea of a portfolio associated with each student, seen as a curriculum that shows the student's progress and achievements, or even information related to learning modalities such as in-person or blended learning. Taking the different phases of the learning process into account, many platforms on the market cannot cover the whole process, basically because they lack methods for certifying, or generating documentation that proves, the acquisition of certain knowledge. Finally, setting technical features aside, another handicap these platforms face is their need to be correctly managed by qualified personnel, not only from the academic but also from the technical point of view. This helps to obtain the desired effect and makes the learning process productive and satisfactory. In general, not all of these limitations apply to every existing platform; rather, each platform presents a subset of them, which forms a basis for implementing possible improvements. Identifying the weakest points of a platform will depend on the objectives or requirements to be fulfilled, and therefore research should be carried out to help choose the most adequate option.
The Case of Moodle

Moodle is an LMS platform basically aimed at providing a set of tools and structures that make it possible to adapt the learning model to an online one. Given the nature of LMS platforms, and of Moodle in particular, the lack of direct communication between the teacher and the students,
or even among the students themselves, makes the system responsible for this communication, acting as mediator, whereas in other types of learning these responsibilities rely on personal interaction. This translates into the need to provide a set of features that substitute for the exchange of information between teacher and student and among students. To achieve this goal, the techniques used are based on statistical methods for compiling information, which includes important aspects such as the students' grades generated by the management modules these systems provide. As for Moodle, although it does provide methods for evaluating the different activities that can be carried out, it does not generate relevant statistics about the evolution of the students based on their grades or on other measures considered useful in these cases. Besides these problems, the grade management system of Moodle has gaps of its own. Moodle organizes the students of a course into groups, each of which can hold a disjoint subset of those students. This layout, along with the characteristics of each group, enables interaction with the students depending on the definition of, and the permissions associated with, that group (Castro, 2007). The problem arises when establishing the grades: this organization into groups is not extended to the classification of students, and therefore no grade reports are provided that take it into account. Because of this, the teacher is forced to perform the segregation manually, which is really tedious, all the more so in courses with a high number of students. As for measuring the outcome of a course, grades are obviously the main scale of measurement, though the outcome is also influenced by other aspects, such as the frequency of visits to the different proposed activities.
Considering grades as the main element of the metric, Moodle lacks some kind of organization of the students into ranges of grades, which would allow the
user to obtain the percentage of students falling within each range. The quality of knowledge acquisition by the students could thus be measured in a moderately accurate way, depending on the number of factors taken into account: the more factors, the more accurate the measure. This measure, along with other aspects, could be established as a highly adequate indicator, so that teachers could readapt the contents in order to increase the reach of the required knowledge. Such content customization is very important when dealing with platforms of this kind. Another factor worth measuring is the submission rate of the activities, that is, the percentage of students who either have not done an activity or have not handed it in; in addition, reports listing those students could be generated. As can be observed, the possibilities for improvement that Moodle leaves open are considerable, especially in the grade system, and an improvement would contribute positively to achieving the goals of LMS platforms in general and of Moodle in particular. However, there is another relevant limitation shared by most LMS platforms, besides those already exposed: the inability to cover the whole learning process, especially its final phases. More specifically, once a course has finished, Moodle does not allow the generation of any kind of certificate identifying each student as a holder of the knowledge learned in it. This would require some kind of module that generates diplomas automatically and systematically for every student. These documents should also be fully customizable so as to fit the characteristics of each course, based on its modality or other aspects. Besides all these limitations, Moodle has some positive features which make it one of the most used LMS platforms.
The objective of describing its limitations is to show that, though the functionality of these platforms is usually wide and flexible, they are not able to cover all the goals considered relevant by those who use them.
The Development Model in Moodle

Development in Moodle can be tackled in different ways, depending on the desired level of independence of the software to be developed from the platform:

•	External applications: An application that uses the database of the platform, without any integration with its interface.
•	Module development: Building new modules that can be integrated into the platform. That integration involves defining an installation in the database and even an inclusion in the interface of Moodle. There is abundant documentation about the coding style for this type of development, as well as libraries of functions for managing the Moodle database (http://docs.moodle.org).

Based on the client's needs, the first option was chosen, since those needs lay down certain requirements whose fulfilment is better suited to a nonintegrated solution. Considering this option, and taking the application goals into account, two architectures can be proposed in order to obtain maximum scalability, allowing easy reuse of the application with later versions of Moodle. The type of architecture to be implemented is ultimately the client's decision, since each one has different goals and estimations.

Suggested Architectures

First, a solution based on a separate application with access to the Moodle database is suggested. A diagram representing this architecture is shown in Figure 1.

Figure 1. ACEM first architecture proposal

In this solution, an application external to Moodle is created; it accesses the Moodle database directly and is structured in the following layers:

•	Access and queries to the Moodle DB: This layer contains the functions needed to perform queries on a Moodle database. The use of query description files is proposed, so that the queries can be changed in a simple way whenever the Moodle database changes. This helps to achieve a certain level of compatibility with later versions of Moodle through minimal changes, although the objective of the application is to support Moodle version 1.6. This layer also abstracts the statistic control functions layer from the data retrieval process.
•	Statistic control functions layer: This layer uses the functions of the previous layer to perform the statistical control; it implements the logic of the application.
•	GUI: Graphical user interface of the application.
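The query description files proposed for the access layer can be realized as an external file of named SQL statements that the layer loads at startup. A minimal sketch follows, written in Python for illustration (ACEM itself is written in PHP); the INI-style file format and function names are assumptions of this sketch, not ACEM's actual design:

```python
import configparser

def load_queries(path):
    """Load named SQL statements from an external query description file,
    so the SQL can be adapted to a changed Moodle schema without touching
    the application code. The layout assumed here is INI: one [queries]
    section, one key per query name."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return dict(cfg["queries"])

def run_query(conn, queries, name, params=()):
    """Execute a query by its symbolic name; callers never embed SQL."""
    return conn.execute(queries[name], params).fetchall()
```

With this scheme, a later Moodle version that renames a table only requires editing the description file, leaving the statistic control layer untouched.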
The second suggested architecture is based on an application linked to Moodle that uses the new database access features of version 1.7, with full support for version 1.6. This architecture is shown in Figure 2. In this case, the layers to be developed are:

•	Statistic control functions layer: This layer implements the logic of the application.
•	GUI: Graphical user interface of the application.

Here, database access is performed through the new application programming interfaces (APIs) that Moodle provides in version 1.7: the Moodle DML Library (Lafuente & Hunt, 2007) and the Moodle DDL Library. Data extraction for Moodle version 1.6 is also supported. Finally, the client chose the first option.
Obtaining the Data Model

Development can start once the data model has been defined. Developing Moodle-based software involves several difficulties, one of the most representative being the scant knowledge available about the Moodle database: neither its data model nor the tables used are public. Thus, in order to find out that information, reverse engineering methods (Hainaut, Tonneau, Joris, & Chandelon, 1993) have to be used. In this case, a tool called DBDesigner (http://www.fabforce.net/dbdesigner4/) was used, since it can extract the data model from the database of an existing installation. Once the data model is obtained, research must be done so
Figure 2. ACEM second architecture proposal
as to determine how Moodle fills the tables corresponding to each element to be analysed. Specifically, because of the distribution of the data in Moodle and the needs of the application, one or more tables are queried for each element. Some of those tables are:

•	User-related tables: They give information about the users of the platform, that is, the students of the courses. This information is necessary for generating reports, certificates, and diplomas.
•	Resource-related tables: They give information about the resources existing in the platform and the way they can be accessed.
•	Graded-activity-related tables: These allow the calculation of grades at course level and platform level.
•	Non-graded-activity-related tables: These are used to build the list of activities for the reports.
•	Course-related tables: These give all the information about the existing courses in the platform, along with an identifier that relates them to both the resource and the activity tables.
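As an illustration, the table groups above might map onto named queries like the following (sketched in Python for consistency with the other examples; the `mdl_` table and column names are assumptions based on a standard Moodle 1.6 installation and should be verified against the actual schema extracted with DBDesigner):

```python
# Hypothetical named queries over a Moodle 1.6 database.
MOODLE_QUERIES = {
    # user-related: students, needed for reports, certificates, and diplomas
    "students": "SELECT id, firstname, lastname FROM mdl_user WHERE deleted = 0",
    # course-related: the id links each course to its resources and activities
    "courses": "SELECT id, fullname FROM mdl_course WHERE id <> 1",
    # resource-related: resources available in a given course
    "resources": "SELECT id, name FROM mdl_resource WHERE course = ?",
    # graded-activity-related: assignment grades joined with their maximum grade
    "assignment_grades": ("SELECT s.userid, s.grade, a.grade AS maxgrade "
                          "FROM mdl_assignment_submissions s "
                          "JOIN mdl_assignment a ON a.id = s.assignment "
                          "WHERE a.course = ?"),
}
```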
Once the information needs and the way of retrieving the data have been defined, some research is required into the situations that might involve an update of the data stored in the tables mentioned above. Special care is needed when generating grades per activity. The table that associates the different activities with a course is built from the data of many other tables, and its information is updated only when a teacher or student accesses the grades section of the course. This means that, to generate an up-to-date platform-level report, it would be necessary to access the grades of each course manually every time a change happens. The option chosen to avoid that is to "touch" the courses involved in the desired report. This method will
consist of logging into each course automatically and accessing its grades Web page (grades.php) in a way that is totally transparent to the user. This technique is effective but, after some testing, it was noticed that a Moodle installation may use an alternative login page, so this case must be taken into account (that piece of information is requested from the user when configuring the application). At present, this technique produces a significant delay when tested with a large number of students; because of that, it may be replaced by retrieving the information through direct queries on the database, as is already done for the grades.
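The "touching" procedure can be sketched as follows, in Python rather than ACEM's PHP. The `grades.php?id=<course>` path and the login form field names are assumptions about the Moodle 1.6 instance, and the login URL is a parameter precisely because an installation may use an alternative login page:

```python
import http.cookiejar
import urllib.parse
import urllib.request

def grade_touch_urls(moodle_base, course_ids):
    """Build the grade-page URL of each course that must be visited so
    that Moodle rebuilds its cached grade tables before a report runs."""
    base = moodle_base.rstrip("/")
    return [f"{base}/grades.php?id={cid}" for cid in course_ids]

def touch_course_grades(moodle_base, login_url, username, password, course_ids):
    """Log in once (the cookie jar keeps the session) and request each
    grade page; responses are discarded, only the side effect matters."""
    jar = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    creds = urllib.parse.urlencode({"username": username, "password": password})
    opener.open(login_url, data=creds.encode())   # authenticate
    for url in grade_touch_urls(moodle_base, course_ids):
        opener.open(url)                          # forces grade recalculation
```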
An Error in the Grade System

While researching grade calculation methods, a possible defect in the grade system of Moodle came to light. The platform weights all the grades, and can weight them a second time if the corresponding option has been enabled by the user. This may lead to an error, because the final grade of a student does not correspond to the expected logic: testing showed that Moodle weights grades even when it has been asked not to do so (Figure 3). In Figure 3, attention must be paid to user "apellido 2, alumno 2." This user has finished his three graded activities, obtaining, respectively, 4 out of 10, 16.5 out of 20, and 3.3 out of 10. Moodle adds the user's grades and then divides the result by the sum of the maximum grades of the activities: 23.8/90 = 0.264, which is equivalent to 26.4%. This does not reflect the grades correctly because, this way, an activity rated on a scale of 100 will always be worth more than an activity rated on a scale of 10, even though both have the same weight. The correct calculation is to weight each grade first and then compute the average, so that the global grade is logical. In the previous example, the results would be these, when normalizing to 10:
Figure 3. Moodle grades
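The two calculation methods just described can be contrasted in a short sketch (written in Python for illustration, although ACEM itself is in PHP). It assumes, from the worked figures, a course whose graded activities have maximum grades summing 90, of which the student completed the three shown; the count of eight graded activities used in the average is likewise an assumption inferred from those figures:

```python
def moodle_global_grade(raw_grades, total_max):
    """Observed Moodle 1.6 behaviour: sum of raw grades divided by the
    sum of ALL maximum grades in the course, rescaled to 10."""
    return sum(g for g, _ in raw_grades) / total_max * 10

def normalized_global_grade(raw_grades, n_activities):
    """ACEM's correction: normalize each grade to 10 first, then average
    over the course's graded activities, so every activity weighs the same."""
    return sum(g / m * 10 for g, m in raw_grades) / n_activities

# Grades of user "apellido 2, alumno 2": 4/10, 16.5/20 and 3.3/10.
done = [(4, 10), (16.5, 20), (3.3, 10)]
print(round(moodle_global_grade(done, 90), 2))       # 2.64
print(round(normalized_global_grade(done, 8), 2))    # 1.94
```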
(4/10)*10 = 4, (16.5/20)*10 = 8.25, and (3.3/10)*10 = 3.3. The average value would then be (4 + 8.25 + 3.3)/8 = 1.94, whereas according to Moodle's calculation it is (23.8/90)*10 = 2.64. Thus, in the case of ACEM, all the grades are weighted equally, regardless of the scale of each element.

Generator of Reports, Diplomas, and Certificates

The following sections describe the functionality obtained in the generator of reports, its main elements, and the difficulties found during development.

Functionality

ACEM was built as an application for a specific client but, considering the limitations of Moodle that it overcomes, we decided to offer it as an added-value component for that learning platform. The application implements the following functionality:

•	Platform-level report generation: This option yields statistical data about the whole platform. The generated report reflects data about all the students on the campus, the users, some generic information about the courses, the resources and activities grouped by type, and information about the grades in the platform. The user can decide whether or not to include courses without graded activities, which changes the final grade since more courses are used to compute the statistics. The information in the final report can be complemented with data typed by the user by means of a WYSIWYG (what you see is what you get) editor. The data of the generated document are shown in several ways (lists, tables, charts, etc.), as can be observed in Figures 4, 5, 6, and 7.
Any of the generated documents can be obtained in different formats, such as PDF, HTML, or MS Word (.doc).

•	Course-level report generation: It allows the user to obtain statistical data about a certain
Figure 4. Information in list mode
Figure 5. Information in table mode
Figure 6. Bar chart
Figure 7. Pie chart
course. This report contains information about the number of students, resources, and activities of that course, as well as the grades. The user can choose which students and grades to see; later in the report, each student's grades are shown grouped by subject. As in the previous case, the user can enter a header and footer text by means of an editor, and the report can also be exported to different formats.
•	Diploma and certificate generation: The application also provides a tool for defining customized diplomas for the students of a course. Some of the configurable elements are the logo images (which are placed automatically), the title, the name of the institution, and so forth. The diplomas can be generated one by one or for groups of students.
Some Components of the Application

Three different layers are distinguished in the architecture shown in Figure 1. One of them is
the graphical user interface (GUI), a fundamental element of any application. Other elements, not shown in the diagram, are the authentication component and the installation component.

•	GUI: It is very important for any application to have a user interface that enables simple and efficient access to its functionality (Gándara, 1995). In ACEM, the aim was to design a user interface as efficient, light, friendly, and functional as possible. All the available options are accessible from the main page, and it is possible to return there from within any of them. The technologies used to build it are HTML, PHP, and cascading style sheets (CSS).
•	Authentication: ACEM deals with personal data, including the grades of several courses, so access to these data must be controlled. To this end, an authentication component checks whether the user trying to access the system has logged in correctly. If not, he will have the chance to
Figure 8. ACEM GUI
log in, but he can never access the data without having been authenticated. The system uses HTTP sessions (Welling & Thomson, 2003) to check this. The username and password are specified during the installation process.
•	Installation: Like any other application, ACEM needs an installation and configuration process. In this particular case, the application checks whether a file called "config.php" exists; if it does not, the Web page for the basic configuration of the application is shown (Figure 9). In this window, several data have to be entered for the application to work properly:
	•	The type of database to be used, along with its name, a user, and the corresponding password.
	•	The Moodle administrator's username and password, required in order to be able to generate the grades.
	•	The URL of the Moodle installation to be used and that of its login Web page.
	•	The username and password for accessing ACEM.
Difficulties Found During the Development

The development of any software application involves difficulties. In this particular case, most of them arose from the need to research different PHP libraries unknown to the developers; another big problem was the time efficiency of the proposed solutions. Several libraries had to be examined in order to perform the different kinds of actions needed: a library for generating charts in PHP, another for creating PDF files, and so forth. In every case, the first option considered was a free software one. For chart generation, we chose a free software PHP library called PHPlot (http://sourceforge.net/projects/phplot/), which allows developers to generate a wide range of charts, such as bar charts or pie charts. As for PDF files, the first choice was the FPDF library (http://www.fpdf.org/) for generating the certificates and diplomas, and DOM PDF (http://sourceforge.net/projects/dompdf/) for the reports. FPDF allows the generation of
Figure 9. ACEM configuration page
complex documents, but it requires every element to be explicitly positioned within the document, which can be really difficult for large documents. This is why DOM PDF was used for generating the reports, as this library performs an automatic conversion from HTML to PDF: the report is initially formatted in HTML and then, by invoking one function of the library, the final PDF document is obtained. However, as the number of students, and thus the size of the generated documents, grows, the time spent generating the reports becomes unacceptable. Therefore, report generation will also be done with the FPDF library, although it takes much more development time. For generating MS Word documents (.doc), several libraries were examined, but none of them worked properly, so the final choice was to generate an HTML document and then change its file extension to .doc because, currently, MS Word converts automatically from HTML to DOC, keeping the structure of the document practically intact.
Another difficulty is the time overhead caused by the large number of database queries to be executed. At first, the queries were executed without control, resulting in excessive execution times. Then, the number of queries was reduced and a cache of query results was implemented; thanks to this, the execution time dropped to a fourth of the original, though we are still working on it.
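A query-result cache of the kind described can be as simple as memoizing each (SQL, parameters) pair. A minimal sketch in Python follows (ACEM implements its cache in PHP); in a scheme like this, the cache would be scoped to a single report generation, so stale results are not a concern:

```python
def cached_query(conn, cache, sql, params=()):
    """Return the result of a query, hitting the database only the first
    time a given (sql, params) pair is seen; repeated identical queries,
    frequent when building platform-level reports, are served from memory."""
    key = (sql, params)
    if key not in cache:
        cache[key] = conn.execute(sql, params).fetchall()
    return cache[key]
```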
Conclusion

An added-value tool for an LMS platform has been developed using Web technologies and is currently in use by the client. Observing the different LMS platforms on the market, it can be noticed that most of them do not provide enough graphical representations of the students' activity in the courses, nor do they allow the generation of certificates or diplomas. What is more, Moodle, which
is one of the most used LMS platforms, does not include those features either. Several options for developing the application were considered and, based on the client's needs, an external application was defined. ACEM allows users to obtain information in many forms, such as documents, charts, or diplomas, which enhances the information stored by the platform. Nowadays, this kind of information is fundamental for the management of studies in a learning platform. Developing the application involved exhaustive research into the data model of Moodle, its management, and the way it deals with grades. After that research, and after resolving some problems, a set of queries was defined so that the application can use them to obtain the requested reports. Since the aim of the application is to generate the most complete, representative, and useful reports possible, several libraries were needed to include charts and to export the reports to the most common formats. The current version of the application is the first one, although development will continue in order to improve its functionality, thus enhancing Moodle.
Future Work Lines

Regarding the possible evolutions of the application, there are several options to work on:

•	Different methods of graphical representation: Information visualization is an area of computer science under active development (Rohrer & Swing, 1997). A bar chart or a pie chart is not enough; charts carrying much richer information are needed, and the technology is improving in that direction.
•	Improve MS Word document generation: A better generation method should be sought in order to avoid the current conversion; some commercial library could be used for this.
•	Reduce report generation times: This is a really important point. It can be accomplished by reducing the number of queries executed, improving the query cache, or even redefining the business logic.
•	Include new graded activities: Certain graded elements not considered now, or other elements that will appear in later versions of Moodle, could be introduced.
•	Adaptation to new versions of Moodle: Since Moodle is continuously evolving, ACEM should adapt to its changes. To do so, new versions of Moodle have to be analysed, including their database and the new APIs available, such as the Moodle DML Library and the Moodle DDL Library.
•	Build ACEM as a module of Moodle: In the near future, the possibility of adapting ACEM into an integrated Moodle module should be considered, with all the work this will involve.

References

Castro, E. (2007). Moodle: Manual del profesor. Retrieved October 28, 2007, from http://moodle.org/file.php/11/manual_del_profesor/Manualprofesor.pdf

Comezaña, O., & García, F. J. (2005). Plataformas para educación basada en Web: Herramientas, procesos de evaluación y seguridad (Tech. Rep. DPTOIA-IT-2005-001). Salamanca, España: Universidad de Salamanca, Departamento de Informática y Automática.

Gándara, M. (1995). User interface: An introduction for educators. In J. M. Alvarez-Manilla & A. M. Bañuelos (Eds.), Computer pedagogical uses. Mexico: CISE/UNAM.
Hainaut, J., Tonneau, C., Joris, M., & Chandelon, M. (1993). Transformation-based database reverse engineering. In R. Elmasri, V. Kouramajian, & B. Thalheim (Eds.), Conference on Entity-Relationship Approach (pp. 364-375). Springer.

Jones, E. R. (2002). Implications of SCORM™ and emerging e-learning standards on engineering education. In ASEE Gulf-Southwest Annual Conference (pp. 20-22).

Lafuente, E., & Hunt, T. (2007). Development: XMLDB documentation. Retrieved October 28, 2007, from http://docs.moodle.org/en/Development:XMLDB_Documentation

Maurer, W. (2004). Estándares e-learning. SEESCYT. Retrieved October 28, 2007, from http://fgsnet.nova.edu/cread2/pdf/Maurer1.pdf

Rengarajan, R. (2001). LCMS and LMS: Taking advantage of tight integration. Click 2 Learn. Retrieved October 28, 2007, from http://www.e-learn.cz/soubory/lcms_and_lms.pdf
Rohrer, R. M., & Swing, E. (1997). Web-based information visualization. IEEE Computer Graphics and Applications, 17(4), 52-59.

Welling, L., & Thomson, L. (2003). Using session control in PHP. In PHP and MySQL Web development. Sams Publishing (Developer's Library).

Additional Readings

Castro, E. (2007). Moodle: Manual del profesor. Retrieved October 28, 2007, from http://moodle.org/file.php/11/manual_del_profesor/Manualprofesor.pdf

Koper, R., & Tattersall, C. (Eds.). (2005). Learning design: A handbook on modelling and delivering networked education and training. Springer.

Lambropoulos, N., & Zaphiris, P. (Eds.). (2007). User-centered design of online learning communities. Hershey, PA: IGI Global.

Spence, R. (2007). Information visualization: Design for interaction. Prentice Hall.

Welling, L., & Thomson, L. (2003). PHP and MySQL Web development. Sams Publishing (Developer's Library).

Rice, W. (2006). Moodle e-learning course development. UK: Packt Publishing.
Chapter XV
Evaluation and Effective Learning:
Strategic Use of E-Portfolio as an Alternative Assessment at University Nuria Hernández Nanclares Universidad de Oviedo, Spain
Abstract

This chapter analyses evaluation as a strategic instrument to promote active and significant learning, and how, within that strategy, the use of alternative assessment and technology-aided learning-and-teaching processes can be of great help. There is an important margin for teachers to design assessment strategically and thereby modify the nature of the students' learning activities. The central question is therefore whether the electronic portfolio used as an assessment tool in the subject "International Economic Relations" has been employed strategically. In other words, is the type of desired learning really being achieved? Is significant and deep learning being stimulated? If not, what kind of learning is being stimulated, and how should the assessment be modified to achieve the desired results? To help answer these questions, we have analysed whether the activities and products which make up the "International Economic Relations" portfolio fulfil the conditions that characterise a strategic evaluation.
Introduction

The present chapter analyses evaluation as a strategic instrument to promote active and significant learning and how, in that strategy, the use
of alternative assessment and technology-aided learning-and-teaching processes could be of great help. The principal aim of the chapter is therefore to discuss the difficulties involved in assessment processes and how the use of digital learning
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
platforms to complement teaching could help turn evaluation into a strategic decision that achieves better learning. The method used to address this question is to describe an educational innovation that has been developed over several years at the University of Oviedo. The subject I teach in the Faculty of Economics, “International Economic Relations” (REI), is the context in which I have developed an electronic portfolio as an assessment tool that helps me to evaluate the students and to involve them in more active learning. This is an optional subject in business administration with 150-175 students in three groups, morning and evening. The technological aspect of the portfolio is supported by the electronic learning platform of the University of Oviedo, Aulanet, which nowadays runs on Moodle. The theoretical framework that inspires this chapter has two complementary dimensions. On the one hand, we have to consider the crucial aspect of evaluation at university and the role teachers have to play as assessment agents. Evaluating the learning achieved by students is a big challenge for teachers and for institutions, especially in a context where the learning of competences and abilities is of much higher importance. As a result, evaluation has to be revisited and used as a strategic tool to promote effective learning, with alternative assessment options explored. On the other hand, the chapter is concerned with the use of technology in education, exploring the possibilities that virtual educational platforms and ICTs offer in helping alternative assessment procedures to enhance active learning. The context is that of technology-aided learning-and-teaching processes. I do not employ the more widespread expression “technology-based learning” because I agree with the idea that technology can help learning but cannot replace the personal and social dimensions of learning. Bearing all these questions in mind, the rest of the chapter is devoted, first, to exploring what type
of learning we want in the modern university and how to use evaluation as a strategic element of the teaching process. Second, to explore the possibilities of using the portfolio as an alternative assessment tool, I describe my own experience with an electronic version of this evaluation methodology. Finally, I critically review the REI evaluation innovation and compare it with what experts consider a strategic assessment tool to enhance effective learning. Hence, the experience is described in detail, trying to find out whether the conditions of a strategic evaluation are present in it.
Assessment as a Strategy, Significant Learning as a Goal

One of the basic objectives of the present-day university is to achieve effective teaching: teaching by transforming the lecture room into a learning space, thus endowing the classes with an additional value that justifies the students’ presence in them. This approach means that students must be active subjects of their own learning; they must experience for themselves the changes in the way of thinking, feeling, and acting that the learning process produces in them. To achieve this, it must be the learners who, with the teacher’s help, build their own learning. This requires the students to become involved in the process, to be willing to take part in the learning opportunities suggested in the lecture room, and to continue the process until they reach a high level of independence. It is essential, therefore, that the students should be highly motivated. Some students are capable of motivating themselves, but the majority need external stimuli to arouse their interest in the task to be carried out and the knowledge to be acquired. One of the aspects that has the greatest influence on the students and their involvement in the learning process is the way they expect
to be assessed under the system of assessment established by the teachers. Studies carried out in the 1970s and 1980s (Miller & Parlett, 1974; Snyder, 1974) indicate that assessment in fact has more influence on the learning process than the actual teaching does. Furthermore, students react to changes in the system of assessment by modifying the way in which they tackle tasks and by generating learning activities adapted to the assessment requirements (Sambell & McDowell, 1998). Hence, a relevant question to ask is what kind of learning is generated by the different assessment systems we are using and whether it is the right kind for the requirements associated with the new situation. The evidence seems to suggest that on many occasions the assessment normally utilized does not stimulate the right kind of learning, that is to say, learning of a significant nature. A margin therefore exists in which the system of assessment, as a basic teaching tool, can be used strategically, motivating the student to acquire a specific type of learning. Clearly, the kind of learning we want to motivate is effective and significant: one in which the students build up their own knowledge, alone or in a group, and in an active manner, in order to acquire conceptual, procedural, and behavioural knowledge. Previous knowledge on the part of the students is fundamental to this type of learning, since the need to connect the new to the old is a basic premise if the students are to acquire really significant learning. According to the constructivist approach, the learning process implies that students build on the structure of their previous knowledge. Constructivists therefore propose the use of organized structures of concepts that form the framework within which the students can relate new materials to their previous knowledge.
Consequently, it is very useful to make the students aware of their prior knowledge of a specific subject by presenting them with some type of material (oral, written, graphic, audiovisual,
etc.) that will help them to characterize the structure and organization of the new knowledge, drawing their attention to it and enabling them to link new to existing knowledge. The use of active learning techniques is essential in this kind of learning. Active methodologies involve the students in their own learning and lead to assessment systems that go beyond conceptual contents, stimulating in the student an integral learning process covering all the abilities required of professionals nowadays. Moreover, a thorough learning process has two additional dimensions which I consider relevant. On the one hand, it has an important social dimension, since collaborative learning helps to achieve more effective and significant results than individual learning alone. On the other hand, the personal dimension is essential, since the students’ processes of reflection on their own learning (metacognition) are an essential element in the degree of thoroughness of this learning. From the foregoing, and in view of the nature of the learning desired, the final assessment, which is normally individual, basically summative, and has a very low formative potential, should be replaced by a continuous assessment process with a significant diagnostic and formative character and with a high level of feedback for the student. In short, the assessment should be designed as a system of incentives aimed at achieving a certain type of learning: significant, active, and collaborative. Therefore, as mentioned previously, there is an important margin that allows teachers to design assessment strategically and modify the nature of the students’ learning activities. The assessment can be conceived as a system of incentives aimed at ensuring that the students adopt specific forms of behaviour directed towards achieving certain types of learning (Gibbs & Simpson, 2005). From this point of view, the assessment becomes a central teaching element, closely
related to the learning objectives and decisive in the choice of contents and the establishment of tasks. Assessment is therefore not an isolated process undergone by students and then used to evaluate them, but rather an activity in which they also participate and through which they can learn (Brown & Glasner, 1999). The next question to consider is what characteristics the evaluation must possess in order to be strategic and contribute to effective learning. According to Gibbs and Simpson (2005), assessment systems must fulfil a number of conditions if they are to motivate the students to become involved in the learning tasks, modifying the way in which they would initially tackle them. Students are also strategic in the use of their time and effort, dividing them up in order to obtain the best results in accordance with what they perceive as the assessment requirements. We must therefore bear in mind that the signals sent by teachers through their assessment strategy do not always lead to the kind of learning desired. To achieve this type of learning, the assessment must fulfil several conditions that relate to the students’ effort, the feedback from the teacher, and how the students respond to this feedback (see Table 1).
As already stated, the students select the topics and the way of carrying out tasks depending on what they think will be asked of them in the exam. So the first condition the assessment has to meet is to guarantee a certain quantity and quality of effort on the part of the student. Hence, a system of assessment designed to make the students dedicate sufficient time and effort to the tasks assigned to them, distributing this effort throughout the entire process, meets the first condition. Moreover, the learning effort made by the student must reach a certain level of quality. For this purpose, the learning tasks and activities must be devised so that the student learns in an efficient manner, clearly demanding high-level mental and cognitive processes to solve these tasks. If the assessment system is to stimulate effective and significant learning, it must itself become a learning activity whose development is designed as a learning process. For this to happen, the correction and feedback activities must play an essential part. The feedback from the teacher has to be of sufficient quantity, quality, and speed. It is important that the corrections and indications offered should focus more on assessing learning, particularly in its formative aspect, and less on assessing the student. The comments
Table 1. Conditions under which assessment supports student learning (Gibbs, Simpson, & Macdonald, 2003, p. 2)

1. Quantity and distribution of student effort: Assessed tasks capture sufficient study time and effort. These tasks distribute student effort evenly across topics and weeks.
2. Quality and level of student effort: These tasks engage students in productive learning activity. Assessment communicates clear and high expectations to students.
3. Quantity and timing of feedback: Sufficient feedback is provided, both often enough and in enough detail. The feedback is provided quickly enough to be useful to students.
4. Quality of feedback: Feedback focuses on learning rather than on marks or students themselves. Feedback is linked to the purpose of the assignment and to criteria. Feedback is understandable to students, given their sophistication.
5. Student response to feedback: Feedback is received by students and attended to. Feedback is acted upon by students to improve their work or their learning.
should focus on the objective for which the task was designed and limit themselves to the quality criteria previously established. They should preferably be in written form and comprehensible to the students; they need to be adapted to the students’ vision of the discipline since, as learners, the students are not always capable of having an overall idea of the subject and of the knowledge it implies. Finally, in order to be truly effective, the assessment system must guarantee that students receive the feedback, assimilate it, and react to it, modifying those matters that the correction has shown can be improved upon. The assessment must therefore be designed in such a way that the feedback is useful, with subsequent tasks and activities incorporating the improvements indicated in the correction of the previous task. It is important to point out that all these elements, which should be taken into account when selecting the assessment strategy, are associated with the learner, the student being an active subject of the learning process, and not so much with the teacher or the teaching process. As I see it, the central idea suggested by Gibbs and Simpson (2005) is that assessment is related to the students’ learning, occupying a central position in their involvement in the process, and leaving teachers the possibility of using it to direct their teaching and achieve the right kind of learning. This reinforces the idea that the learner, with all his internal learning processes and motivation, plays a central role in the teaching-learning process. The teacher, as an expert in the subject and in the best way of learning it, thereby becomes a facilitator of that process, endowing his teaching and assessment decisions with a strategic character that guarantees the best results.
This activity of the teacher as a mediator in the learning process of his students and a strategist in the use of teaching and assessment does not come cheaply (Feuerstein, 1990; Feuerstein et al., 1980). A great effort is required, both in terms of time
and of training and conviction, to employ active methodologies and to prepare the feedback necessary to make the correction of activities effective. New technologies and their application in teaching may alleviate this burden somewhat. The possibilities offered by the Web and virtual teaching platforms mean that computers have become an essential work tool and that the means of communication between teacher and students, and among the latter, have multiplied. Thus, technology plays a major part in various aspects of the teaching-learning process, among which are the use of collaborative work and the possibilities of the strategic use of assessment, the matter that concerns us here. Information and communication technologies (ICTs) can help in the design and programming of activities: encouraging a specific distribution of work time by establishing work procedures directed towards more productive learning, communicating the results of the correction process to the students, and establishing feedback between the students and the teacher; in short, by involving the students more in the tasks and making them aware of the level of quality that these should possess.
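As an illustration of the first of these possibilities, the even distribution of assessed tasks across a term can be sketched in a few lines of code. The following Python sketch is purely illustrative: the task names, term start date, and twelve-week span are hypothetical, and no real platform API is used; it simply spaces a list of deadlines as evenly as possible over the weeks of a term.

```python
from datetime import date, timedelta

def build_schedule(tasks, term_start, n_weeks):
    """Spread assessed tasks as evenly as possible over a term.

    tasks: list of task names (hypothetical examples below).
    Returns a list of (week_number, deadline_date, task_name) tuples.
    """
    if not tasks:
        return []
    # Space the deadlines evenly across the available weeks.
    step = max(1, n_weeks // len(tasks))
    schedule = []
    for i, task in enumerate(tasks):
        week = min(1 + i * step, n_weeks)  # never past the end of term
        schedule.append((week, term_start + timedelta(weeks=week), task))
    return schedule

# Hypothetical short-term tasks in the style described in this chapter.
tasks = [
    "Concept map on globalization",
    "Question cluster: international capital markets",
    "Test on the basic theory of exchange rates",
    "Concept map on the reform of the IMF",
]
for week, deadline, task in build_schedule(tasks, date(2007, 10, 1), 12):
    print(f"Week {week:2d} ({deadline}): {task}")
```

A real course would, of course, publish such deadlines through the platform itself; the point of the sketch is only that the first condition of Table 1 (effort distributed evenly across topics and weeks) can be made explicit and checked mechanically.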
The Electronic Portfolio of “International Economic Relations”

This section describes the teaching innovation carried out in the Department of Applied Economics at the University of Oviedo for the subject “International Economic Relations.” The innovation is designed as an alternative assessment that permits the evaluation of more than conceptual knowledge. One of the instruments that can be used to carry out this type of alternative assessment is the student’s portfolio. The portfolio is a technique for collecting and compiling evidence of the professional competences that qualify a person for satisfactory professional development. So,
this tool is a collection of processes and products which result from the student carrying out the learning activities designed by the teacher to achieve certain objectives. These objectives should be set for the acquisition of abilities, using suitable contents selected for this purpose. Originally, the use of portfolios appeared in artistic disciplines, especially architecture and design, as a way of demonstrating professional capacities in the labour market. In the educational field, portfolios have been adapted as a teaching and assessment technique, used as an alternative method of evaluating the capacities and abilities the students have acquired during the learning process (Klenowski, 2002; Klenowski, Askew, & Carnell, 2006). Different options exist regarding the contents of the portfolio. They range from compulsory contents, where the tasks to be performed are decided by the teacher, to pure portfolios, where it is the student or group of students responsible for the portfolio who decides which of their learning products are the best. This instrument permits continuous assessment, since the tasks can be carried out one after the other throughout the learning period, and the selected products can be corrected by the teacher to reach the formative goal. At the same time, it is possible to ask for the inclusion in the portfolio of artefacts which show the learning processes followed by the student, allowing these processes to be assessed. This is of particular interest when we are dealing with collaborative portfolios and it is necessary to assess the progress of the work group. Summing up, if an alternative, continuous, formative assessment is desired, the student’s portfolio may be a good option. Such a portfolio-based approach to the assessment of student learning has been used in many different situations.
There are many references in the literature to experiences using portfolio-based assessment at all levels, including higher education. For example, and among many others, Cooper (1996) describes
the use of the portfolio in the practicum assessment of the professional training course for part-time youth workers in the United Kingdom. Johnson-Bogart (1995) explains the use of a portfolio in the University of Washington’s Interdisciplinary Writing Program; in this experience, of special importance is the opportunity the portfolio gives students to be self-reflective about their own writing. Spillane (1999) explains how the portfolio can be used in lifelong adult education, and Williams, Davis, Metcalf, and Covington (2003) apply this assessment system to teacher education programs. All of them find this alternative method of evaluation useful and valuable, although to make such assessment more widely accepted, further research comparing the portfolio with conventional methods has to be conducted. In making alternative assessment methods more widely accepted and used, the evolution of new technologies could be of great help. In the case of portfolios, the development of electronic portfolios could play a valuable role in higher education (Challis, 2005). Experiences in using the portfolio in Spanish universities can be found in Barragan (2005) and in Agra, Gewerc, and Montero (2002).
Description of the Innovation

The essential innovative teaching aspect in “International Economic Relations” is a change in the way of assessing the knowledge acquired by the students, with the purpose of stimulating active learning and encouraging class attendance by transforming classes into a space for collaborative activities, where interactions between peers, both within the collaborative group and in the large group, and also between students and teachers, form the focal point of activity in the lecture room. This helps give the classes a value of their own, as a work time and place whose products will be assessed by means of an electronic portfolio linked to the collaborative work both inside and outside of class.
This experience started during the 2001-02 academic year and was repeated during the following years, with a number of changes and improvements relating to the learning objectives and the activities through which these objectives are to be achieved, as well as certain modifications in the actual development of the classes and in the way of assessing attendance, work done, and the contents of the file. The thorough renovation of the subject was completed in the 2004-05 academic year, when the management of the portfolio passed over to the virtual teaching platform of the University of Oviedo, Aulanet, thus converting it into an electronic portfolio. This portfolio is called electronic because of the use of ICTs and the virtual platform as support and infrastructure in the design of the course. This is not the typical “online” subject where the students have to do the work on their own at home. On the contrary, the tasks are distributed over different times: classroom time, working-group time, individual time, and deadline time. The connection between these times is made electronically, so the materials and written instructions are handed out through the Web page, but the activities are presented, explained, and begun in classroom time. To take advantage of digital formats, submission has to be done through the platform’s working-group tools, and most of the management of the activities and the communication between students and teachers is done electronically. So, in this experience, the technology is of great help, but the initiative remains essentially face-to-face. The technology cannot substitute for the personal and social dimensions of learning. The contents of the portfolio have become more sophisticated over the years, progressing from exercises in the application of theory to more elaborate pieces of work.
Nowadays, the portfolio is made up of several different kinds of processes and products: preparing a piece of work on a certain aspect of the international economy (international economic institutions and their operation, the international monetary
system, trading relations, etc.), preparing and taking part in a debate on globalization, developing tasks associated with the theoretical contents of each topic, providing any complementary work which they consider will improve the contents of the portfolio, and so forth. Attending classes always counts for a certain percentage in the final assessment of the portfolio. The assessment system proposed involves the students in a teaching-learning process which presents three elements considered essential to achieve high-quality learning: generating active learning by involving students in class activities and tasks, fostering collaborative learning by using the work groups as the focal point for activities, and applying an alternative assessment process by gathering the learning products together in an electronically managed portfolio. So the process is active, collaborative, and supported by new technologies. Currently, the development of this experience allows different types of objectives to be achieved. The main objective of the innovation is to put into practice a procedure of alternative assessment that stimulates active learning by the students, consisting of an electronic, collaborative portfolio with which they can pass the “International Economic Relations” subject through different levels of requirements and involvement. In addition, the following secondary objectives are set: rethinking the university teacher’s role as an assessor; revising the assessment process within the context of learning abilities, recognizing its difficulties and the need for alternative forms of assessment; and considering and analysing the portfolio as one such possible form of alternative assessment. To assess the experience over the different years we carried out a survey among the students, both at the beginning and at the end of the academic term. This was slightly modified over the years in order to adapt to changes in the project.
The initial survey was designed to find out what
prior intentions and expectations the students had with regard to the subject and the active practical work. The purpose of the final survey was to ascertain whether the level of interest in the subject had been maintained until it was completed, whether students considered it important for their education, the degree of difficulty of the subject, the suitability of the bibliography, the students’ willingness to take part in further innovative teaching experiences, and a series of assessments of their collaborative work and its possibilities. Apart from the questions, the students were given the opportunity to indicate those aspects that could be improved upon in the future. The main observations were: there should be more practical group work, debates, and greater opportunities to express opinions, though the students are conscious of the difficulties they have in participating more in class (particularly orally); the practical work should be more closely linked to real-life situations and more up-to-date (they also suggested commenting on newspaper articles in class); the work done should be better assessed, since it requires a great effort on their part; and the organization should be improved to reduce class sizes. The results of this assessment were very satisfactory, which is why the experiment has continued over several successive academic years (Hernández Nanclares, 2004, 2006).
The Portfolio as a Strategic Instrument of Assessment

The question under consideration at this point in the development of the innovation is whether the system described here is being used strategically, in other words, whether it is encouraging significant and deep learning in my students. As has been discussed throughout the chapter, there is a margin that allows the assessment to be oriented towards the kind of work done by the students and the way in which they tackle the tasks.
To this end, the way to gain greater insight into this experience is to ask oneself whether this objective is really being achieved: is the right kind of learning being stimulated? If not, what kind of learning is being stimulated? How should the assessment be modified to achieve the desired results? (Gibbs & Simpson, 2003). A first approximation to help answer these questions is to analyse whether the activities and products which make up the “International Economic Relations” portfolio, in its present form, fulfil the conditions indicated by Gibbs et al. (2003) and presented in Table 1. So, in this section, the strategic elements of the electronic portfolio of the REI subject are examined and summarized in Table 2.
Quantity and Distribution of Student Effort

The first condition relates to the amount and distribution of the students’ effort. To ensure that they dedicate sufficient time to the subject, we developed a series of tasks spread out over the term. In the first place, the students were asked to prepare reports on the international monetary system over a period of twelve weeks. This is a group project, but each member of the group is responsible for one particular report. One hour of class time per week, the hour designated “practical work,” is entirely devoted to the development and completion of this work. To prevent the students from concentrating all their effort at the end, just before the deadline for handing in the work and the start of the Christmas holidays, we suggested a work plan with a detailed schedule fixing a series of tasks and convenient time periods for completing them. Each week the groups check how the work is progressing. Group meetings are arranged for the end of November; in them, each student has to present to the group and to the teacher the outline, the central ideas, and the planning of the report before dedicating the remaining time to writing it.
Secondly, apart from this “medium-term” work, “short-term” tasks are decided upon. These are associated with the different topics being dealt with in class and add extra value to the portfolio. They are activities that have to be handed in within a short period and normally involve completing classwork or a short check of knowledge acquired. Some of them are voluntary, so excessive pressure is not put on the students. In general, they are the responsibility of the whole group but, in some cases, such as the knowledge test, they are carried out by the individual, though the average mark of the group is entered in the portfolio. Thirdly, a “long-term” task, presented at the start of the year but not carried out until January, has also been conceived. It involves preparing and taking part in a debate on globalization. With this organization of tasks we believe that the students’ work effort and time will be divided up evenly throughout the term and among the contents.
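The way these three kinds of tasks, together with attendance, combine into a final portfolio mark can be sketched as a weighted aggregation. The weights below are hypothetical (the chapter states only that attendance counts for “a certain percentage”), as are the 0-10 marking scale and the function name; the sketch does show, however, how an individual knowledge test can enter the portfolio through the average mark of the whole group, as described above.

```python
def portfolio_mark(attendance_rate, report_mark, short_task_marks,
                   debate_mark, group_test_marks,
                   weights=(0.10, 0.40, 0.25, 0.15, 0.10)):
    """Aggregate the portfolio components into a single 0-10 mark.

    weights: hypothetical shares for attendance, the medium-term
    report, the short-term tasks, the debate, and the knowledge
    test (entered as the average mark of the whole group).
    """
    w_att, w_rep, w_short, w_deb, w_test = weights
    short_avg = sum(short_task_marks) / len(short_task_marks)
    # The individual test counts via the group's average mark.
    group_avg = sum(group_test_marks) / len(group_test_marks)
    return (w_att * 10 * attendance_rate   # attendance as a 0-1 rate
            + w_rep * report_mark
            + w_short * short_avg
            + w_deb * debate_mark
            + w_test * group_avg)

mark = portfolio_mark(attendance_rate=0.9, report_mark=8.0,
                      short_task_marks=[7.0, 9.0], debate_mark=6.0,
                      group_test_marks=[8.0, 6.0, 7.0])
print(round(mark, 2))
```

Making such a formula explicit to the students, whatever the actual weights, is itself part of the strategy: it signals where they should distribute their effort.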
Quality and Level of Student Effort

The second condition refers to the quality of the effort made by the students. To ensure that their learning is of a suitable nature (significant, active, and collaborative), the activities have been devised so that students become involved in mental processes of a high cognitive level, with increasing degrees of difficulty. As an example, the reports must be produced within clearly set limits, with a well-defined analysis perspective. The key concepts to be covered are specified, and the space is limited to three or four sheets, so the students have to concentrate on the really relevant aspects of the topic. A very important role is played at this point by the way instructions are given. They are written, very clear from the start, and include indications as to how the elements will be assessed. Moreover, the specific assessment criteria that will be taken into consideration during the correction work are at the students’ disposal. As already mentioned, the “short-term” tasks are related to the day-to-day dynamics of the
class. The basic structure followed in class is to provide the students with materials that have to be processed inside and outside the class, to consider question clusters and discuss the materials, and to work in a small group and present the results in the large one. Within this basic organization, the function of the teacher is to coordinate participation in class, combined with some lectures. The bulk of the work here consists of preparing materials, producing the question clusters, and presenting the topic in a way that will prove to be of interest and will make the students want to know more about it, thus motivating them to prepare the materials and consult the bibliography. This kind of work allows the students to investigate their previous conceptions about a particular question, share them with their group and, from there, through discussion and participation, acquire new knowledge that permits them to advance. It also forces them to develop new abilities and skills related to communication, teamwork, comprehension, and analysis. The tasks then allow them to complete the process and present it in the form of a product. The students have therefore been asked to produce a concept map on globalization, present written answers to the question clusters of the material relating to international capital markets, answer a short test on the basic theory of exchange rates, and produce another concept map on the reform of the International Monetary Fund. We believe the tasks are sufficiently demanding and challenging to involve the students in productive learning activities. The fact that they are performed in a work group means that the knowledge acquired is much richer, the levels of difficulty of the tasks increasing as the groups develop their own dynamics and become more consolidated.
The Role of Correction and Feedback

The rest of the conditions refer to feedback: on the one hand, the quantity and quality of corrections and the opportunity the teacher has of
Evaluation and Effective Learning
making them; on the other, the spirit in which the students receive them and their effects on future learning. As regards the quantity and timing of feedback, we, the teachers, try to make corrections as quickly and opportunely as possible. For this purpose, we use ICTs and the subject's virtual platform to speed up communication. We also check the messages that the students send to their forums in order to detect problems, imbalances, and erroneous concepts, which we attempt to rectify. We make suggestions to them and provide additional bibliographic references. The forums allow us to detect gaps that may have remained, thus providing the opportunity to complete the information in another class or with messages to the general forum, to which all the students have access. To achieve a quality that endows the corrections with a true formative character, we distinguish between two assessment dimensions. First, we assess the quality of the learning process, maintaining constant observation of the development of the groups. Three instruments are basically used: direct observation of the group activity during attendance hours, a check of the messages in the group forums, and the opinions of the students themselves, gathered directly during classes and in the group meeting. In this way we aim to detect possible malfunctions in the development of the group and solve problems of relationships and understanding before the end of the term. Second, the intention is to offer the students useful corrections of the learning products. Normally, the objectives, contents, and assessment criteria of the tasks are clearly established and known to the students. Thus, the comments refer to previously described variables, which enables students to make sense of the corrections. Moreover, the assessments are qualitative, avoiding as far as possible a final, numerical mark.
For example, in the case of the concept map, the students were sent a written comment together
with an explanation of the criteria; the work of all the groups was presented in class, and the students were encouraged to look at other students' work and compare it with their own. This allowed them to become aware of the different levels of quality that existed and of how other groups had gone about planning and presenting the work. They therefore gain an idea of what is involved in the search for quality in the work and can begin the self-assessment process. As for the knowledge test, the correction was carried out in pairs: the teacher resolved the questions in class and, at the same time, each student checked an anonymous colleague's exam. Different levels of quality were fixed, depending on the completeness of the answers, and students were asked to indicate this level in the exercise they corrected. This makes them aware of what it means to assess somebody else's work, lets them learn from their own and other people's mistakes, and encourages adherence to quality criteria that can reasonably be expected. The last of the conditions established in Table 1 refers to the spirit in which students accept the corrections and how these affect their future learning. To ensure that the students receive the corrections and incorporate them into their learning processes, we follow two guidelines: to carry out two or more short tasks of the same kind, and to check the progress of the longer tasks each week. Two concept maps are therefore produced: the first at the start of the academic year, with precise instructions, exhaustive corrections, and an exhibition of the work; the second during the Christmas period, to check whether the students are incorporating the improvements indicated in the previous production. In addition, the questions and issues of the voluntary tasks and the knowledge check are so conceived that the same models, concepts, and theories are considered from different points of view. For their part, the medium-term reports are checked weekly.
The work plan is checked and the work is completed at the group meeting.
Table 2. Is the REI electronic portfolio strategically used?

Assessment conditions and the corresponding activities and products of the REI portfolio:

1. Quantity and distribution of student effort
Design of tasks with different hand-in periods. "Short-term" tasks: associated with class activities; responsibility of the group; some are voluntary. "Medium-term" tasks: preparation of reports; work plan with detailed schedule; at least one group meeting; weekly check during the practical class. "Long-term" tasks: globalization debate; maintaining effort at the end of the term; examining in greater depth ideas already dealt with.

2. Quality and level of student effort
Design of tasks in which conceptual objectives are combined with abilities and skills. "Short-term" tasks: knowledge and analysis of, and relationships between, concepts; preparation of contents starting out from previous personal and group knowledge; discussion and justification of different opinions; reading comprehension in different languages. "Medium-term" tasks: search for, selection, and integration of information; taking decisions on how to organise the different phases; written communication. "Long-term" tasks: development of oral communication skills; consolidation of teamwork capacity; defence of positions through technical arguments.

3. Quantity and timing of feedback
Returning corrections within a reasonable period of time in order to maintain both the teacher's and the student's interest in the task and its results. Use of the electronic means of communication available on the subject Web page.

4. Quality of feedback
Preparation of written corrections in accordance with the following ideas: start out from clear and familiar assessment criteria; focus the correction on what the student has done, being positive (highlight mistakes but also things done well); place special emphasis on ways of improving future tasks; make the students aware of the minimum quality criteria required in the technical work and in the quality of their own work. Carrying out individual and group self-assessment, and assessment in pairs.

5. Student response to feedback
A qualitative assessment, without a final mark, to ensure attention is paid to the formative aspect. Repetition of tasks in the same style to incorporate corrections made and past learning. Incorporation of individual assessments in the group portfolio to encourage responsibility and a desire for quality. Cumulative assessment of the portfolio with the option of exemption from the exam, which encourages self-motivation, continuous effort, and the capacity to extend the reward for work done.
The entire foregoing analysis indicates that the assessment system conceived is strategic in character and can help the student to generate suitable learning. At all events, the definitive check would be to ask the students. For this purpose, during the 2006-07 academic year, we modified the questionnaires with which the innovation is assessed in order to try to detect these aspects. In this new form of assessing the experience we used the “Assessment Experience Questionnaire” developed by Brown, Gibbs, and Glover (2003) and Gibbs and Simpson (2004). We also tried
to improve the part relating to the assessment of collaborative work and the usefulness of the subject page on the virtual platform, Aulanet. The questionnaire is given to students at the end of the semester, in January, before they receive their final marks. The answers are thus reliable, because students have just finished the term and are not yet influenced by disappointment over the summative assessment. At this point in the research, we do not yet have results from this first evaluation of the innovation. We cannot compare it with the results of previous years because the questions and the evaluation tool have been completely
changed. The previous experience has been useful in designing the current innovation, but the one described here is a completely new one. The basic differences lie in how the experience is evaluated and in how the results and their implications are analysed.
Conclusion

The great challenge facing university teachers nowadays is to promote adequate learning in students. Clearly, the kind of learning we want to encourage is effective and significant learning in which students build their own knowledge, alone or in a group, and in an active manner, in order to acquire conceptual, procedural, and behavioural knowledge. One of the most important tools we have to reach this aim is evaluation. Teachers have considerable latitude to design assessment in a strategic manner and so modify the nature of the students' learning activities. Assessment can be conceived as a system of incentives aimed at ensuring that students adopt specific forms of behaviour directed towards achieving certain types of learning. Assessment thus becomes a central teaching element, closely related to the learning objectives and decisive in the choice of contents and the setting of tasks. To achieve this type of learning, the assessment must fulfil several conditions relating to the students' effort, the feedback from the teacher, and how the students respond to this feedback. With all this in mind, the central question of this chapter is whether the innovation developed in the subject "International Economic Relations," already described here, is being used strategically. In other words, is the objective of this experience really being achieved? Is significant and deep learning being stimulated? If not, what kind of learning is being stimulated? How should the assessment be modified to achieve the desired
results? To help answer all these questions we have analysed whether the activities and products which make up the "International Economic Relations" portfolio, in its present form, fulfil the conditions that characterise a strategic evaluation. After this review, we can conclude that the assessment system conceived and practised in the experience described is strategic in character and can help students to generate suitable learning. Of course, the experience could be improved, and for that the students' opinions have to be taken into account. Future versions of the REI electronic portfolio will be modified in the light of the results of the questionnaire with which we are evaluating the experience.
Future Research and Directions

The research I began in the 2006-07 academic year is based on the "Assessment Experience Questionnaire" developed by Brown et al. (2003) and Gibbs and Simpson (2004), and it has a specific objective: to analyse the assessment tool used in view of its strategic capacity to generate efficient learning. The future research will be oriented in several directions:

• First, I want to refine the assessment tool. This means gaining deeper knowledge about the portfolio technique as a means of evaluation at university, so further analysis of the theoretical aspects and of previous experiences will be the field of research.
• Secondly, I want to analyse the results of the questionnaire and change those aspects of the assessment that do not contribute to efficient learning. This implies using the portfolio approach again next year and collecting more data about it, so that comparisons can be made. The analysis of the results will probably require us to rethink or redefine the different activities which are part of the portfolio. Perhaps more effort has to be put into tasks related to capacities and competencies than into conceptual questions.
• Third, I would like, if possible, to compare my results with other experiences using the same questionnaire in a different context: careers, specialities, universities, and so forth.
References

Agra, M. J., Gewerc, A., & Montero, M. L. (2002). El portafolios como herramienta de análisis en experiencias de formación online y presenciales. II Congreso Europeo de Tecnologías de la Información en la Educación y en la Ciudadanía, Barcelona.
Barragán Sánchez, R. (2005). El portafolio, metodología de evaluación y aprendizaje de cara al nuevo Espacio Europeo de Educación Superior. Una experiencia práctica en la Universidad de Sevilla. Revista Latinoamericana de Tecnología Educativa, 4(1), 121-139. Retrieved October 28, 2007, from http://www.unex.es/didactica/RELATEC/sumario_4_1.htm
Brown, E., Gibbs, G., & Glover, C. (2003). Evaluation tools for investigating the impact of assessment regimes on student learning. Bioscience Education E-Journal, 2. Retrieved October 28, 2007, from http://bio.ltsn.ac.uk/journal/vol2/beej2-5.htm
Brown, S., & Glasner, A. (Eds.). (1999). Assessment matters in higher education. UK: Open University Press.
Challis, D. (2005, Fall). Towards the mature eportfolio: Some implications for higher education. Canadian Journal of Learning and Technology, 31(3).
Cooper, T. (1996). Portfolio assessment in higher education. In Proceedings Western Australia Institute for Educational Research Forum 1996. Retrieved October 28, 2007, from http://www.waier.org.au/forums/1996/cooper.html
Feuerstein, R. (1990). The theory of structural cognitive modifiability. In B. Z. Presseisen (Ed.), Learning and thinking styles: Classroom applications (pp. 68-134). Washington, DC: National Education Association.
Feuerstein, R., Rand, Y., Hoffman, M. B., & Miller, R. (1980). Instrumental enrichment. Baltimore, MD: University Park Press.
Gibbs, G., & Simpson, C. (2003). Does your assessment support your students' learning? Journal of Learning and Teaching in Higher Education, 1(1).
Gibbs, G., & Simpson, C. (2004). Measuring the response of students to assessment: The assessment experience questionnaire. In C. Rust (Ed.), Improving student learning: Theory, research and scholarship. Oxford: Oxford Centre for Staff and Learning Development.
Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1, 3-31.
Gibbs, G., Simpson, C., & Macdonald, R. (2003). Improving student learning through changing assessment—A conceptual and practical framework. Paper presented at the European Association for Research into Learning and Instruction Conference, Padova, Italy.
Hernández Nanclares, N. (2004). La evaluación mediante portafolio en Relaciones Económicas Internacionales. In R. Rodríguez, J. Hernández & S. Fernández (Eds.), Docencia universitaria: Orientaciones para la formación del profesorado (pp. 331-341). Documentos ICE, Instituto de Ciencias de la Educación, Universidad de Oviedo.
Hernández Nanclares, N. (2006). El portafolios electrónico: Una alternativa para evaluar en la Universidad. Paper presented at the I Jornadas de Innovación Educativa de la Escuela Politécnica Superior de Zamora, June, Zamora, España.
Johnson-Bogart, K. (1995). Writing portfolios: What teachers learn from students' self-assessment. In Washington Centre's Evaluation Committee (Ed.), Assessment in and of collaborative learning. Washington: Washington Centre for Improving the Quality of Undergraduate Education.
Additional Reading

Baron, C. (1996). Creating a digital portfolio. Indianapolis, IN: Hayden Books.
Barrett, H. (2000). Create your own electronic portfolio. Learning & Leading with Technology, 27(7), 14-21.
Barrett, H. (2000). The electronic portfolio development process. Retrieved October 28, 2007, from http://electronicportfolios.org/portfolios/EPDevProcess.html
Barrett, H. (2005). Research electronic portfolios and learner engagement. White paper. Retrieved October 28, 2007, from http://www.taskstream.com/reflect/whitepaper.pdf
Bates, A., & Poole, G. (2003). Effective teaching with technology in higher education. San Francisco: Jossey-Bass.
Belait, L. (2001). La evaluación de la acción. El dossier progresivo de los alumnos. Diada Editoras.
Benito, A., & Cruz, A. (Eds.). (2005). Nuevas claves para la docencia universitaria en el Espacio Europeo de Educación Superior. Madrid: Narcea.
Bird, T. (1997). El portafolio del profesor: Un ensayo sobre las posibilidades. In J. Millman & L. Darling-Hammond (Eds.), Manual para la evaluación del profesorado (pp. 332-351). Madrid: La Muralla.
Bitter, G., & Pierson, M. (2005). Using technology in the classroom (6th ed.). Boston: Pearson.
Brookheart, S. M. (2001). Successful students' formative and summative uses of assessment information. Assessment and Evaluation in Higher Education, 8(2), 154-169.
Bullock, A. A., & Hawk, P. (2001). Developing a teaching portfolio: A guide for preservice and practicing teachers. Upper Saddle River, NJ: Prentice-Hall.
Campbell, D., Cignetti, P., Melenyzer, B., Nettles, D., & Wyman, R. (1997). How to develop a professional portfolio: A manual for teachers. California University of Pennsylvania.
Campbell, D., Melenyzer, B., Nettles, D., & Wyman, R. (2000). Portfolio and performance assessment in teacher education. Boston: Allyn & Bacon.
Cano, E. (2005). El portafolios del profesorado universitario. Un instrumento para la evaluación del desarrollo profesional. Barcelona: Octaedro/ICE-UB.
Carless, D. M. (2002). The mini-viva as a tool to enhance assessment for learning. Assessment and Evaluation in Higher Education, 27(4), 353-363.
Carney, J. (2005). What kind of electronic portfolio research do we need? Retrieved October 28, 2007, from it.wce.wwu.edu/carney/Presentations/SITE05/ResearchWeNeed.pdf
Casado Ortiz, R. (2006). Convergencia con Europa y cambio en la universidad. Los profesores y las nuevas tecnologías como elementos clave en el nuevo modelo de aprendizaje del Espacio Europeo de Educación Superior. Edutec. Revista Electrónica de Tecnología Educativa.
Castillo, S. (2004). Use y proporcione retroacción. In L. M. Villar (Coord.), Programa para la mejora de la docencia universitaria. Madrid: Pearson/Prentice Hall.
Greer, L. (2001). Does changing the method of assessment of a module improve the performance of a student? Assessment and Evaluation in Higher Education, 26(2), 128-138.
Hebert, E. (2001). The power of portfolios: What children can teach us about learning and assessment. Jossey-Bass.
Klenowski, V. (2002). Developing portfolios for learning and assessment: Processes and principles. London: RoutledgeFalmer.
Klenowski, V., Askew, S., & Carnell, E. (2006). Portfolios for learning, assessment and professional development. Assessment and Evaluation in Higher Education, 31(3), 267-286.
Lyons, N. (Ed.). (1998). With portfolio in hand: Validating the new teacher professionalism. New York: Teachers College Press.
Martin-Kniep, G. (1998). Why am I doing this? Purposeful teaching through portfolio assessment. Portsmouth: Heinemann.
McLaughlin, M., & Vogt, M. (1996). Portfolios in teacher education. Newark, NJ: International Reading Association.
McLaughlin, M., Vogt, M. E., Anderson, J. A., DuMez, J., Peter, M. G., & Hunter, A. (1998). Professional portfolio models: Reflections across the teaching profession. Norwood, MA: Christopher-Gordon Publishers.
Miller, C. M. I., & Parlett, M. (1974). Up to the mark: A study of the examination game. Guildford: Society for Research into Higher Education.
Murphy, P. (2003). E-portfolios: Collections of student work move from paper to pixels. TLTC News, University of California. Retrieved October 28, 2007, from http://www.uctltc.org/news/2003/02/feature.html
Pozuelos, F. J. (2003). La carpeta de trabajos. Una propuesta para compartir la evaluación en el aula. Cooperación Educativa. Kikirikí, 71/72, 37-43.
Sambell, K., & McDowell, L. (1998). The construction of the hidden curriculum: Messages and meanings in the assessment of student learning. Assessment and Evaluation in Higher Education, 23(4), 391-402.
Snyder, B. R. (1971). The hidden curriculum. Cambridge, MA: MIT Press.
Spillane, M. G. (1999, June). Portfolio assessment in higher education: Seeking credibility on the campus. Journal of the National Institute of the Assessment of Experiential Learning, 17-28.
Tosh, D., & Werdmuller, B. (2004). E-portfolios and Weblogs: One vision for ePortfolio development. Retrieved October 28, 2007, from http://eduspaces.net/dtosh/files/7371/16864/ePortfolio_Weblog.pdf
Vandervelde, J. M. (2004). A+ rubric: Rubric for electronic portfolio. Retrieved October 28, 2007, from http://www.uwstout.edu/soe/profdev/eportfoliorubric.html
Williams, S. C., Davis, M. L., Metcalf, D., & Covington, V. M. (2003). The evolution of a process portfolio as an assessment system in a teacher education program. Current Issues in Education, 6(1). Retrieved October 28, 2007, from http://cie.ed.asu.edu/volume6/number1/
Wiske, M. A. (2005). Teaching for understanding with technology. San Francisco: Jossey-Bass.
Yorke, M. (2001). Formative assessment and its relevance to retention. Higher Education Research and Development, 20(2), 115-126.
Chapter XVI
Formative Online Assessment in E-Learning Izaskun Ibabe University of the Basque Country, Spain Joana Jauregizar Quality Evaluation and Certification Agency of the Basque University System, Spain
Abstract

This chapter provides an introduction to formative assessment, especially as applied within an online or e-learning environment. The characteristics of the four strategies of online formative assessment currently most widely used—online adaptive assessment, online self-assessment, online collaborative assessment, and portfolio—are described. Reference is made throughout to recent research on the effectiveness of online formative assessment for optimizing students' learning. A case study in which a computer-assisted assessment tool was used to design and apply self-assessment exercises is presented. The chapter emphasizes the idea that all types of assessment need to be conceptualized as "assessment for learning." Practical advice is given for the planning, development, implementation, and review of quality formative online assessment.
Introduction

Assessment should be the first step in educational design (Stiggins, 1987). A significant body of research supports the view that the design of assessment is critical in determining the direction of student effort, and e-learning is no exception to this (see Black & William, 1998). Furthermore,
when there is alignment between what teachers want to teach, how they teach, and how they assess, teaching is likely to be more effective (Sluijsmans, Prins, & Martens, 2006). E-learning can be defined as the use of digital technologies and media to deliver, support, and enhance teaching, learning, assessment, and evaluation (Armitage & O'Leary, 2003). Nevertheless, it is not essential for the assessment of e-learning to be online, even if it is often appropriate, for example, when rapid feedback is required on progress and achievement testing (Macdonald, 2004). In any case, in this chapter, we focus on the formative power of online assessment. Formative assessment is associated with considerable improvement in students' performance, though frequent testing and reporting of scores can be prejudicial to weaker students. Institutions are increasingly turning to information and communication technologies (ICTs) to plan their teaching, learning, and assessment tasks. In e-learning environments it is necessary to develop assessment models appropriate for the object of assessment and the different contexts involved. By comparison with the use of computers to aid student learning, computer-assisted assessment (CAA) is a relatively recent development. In the assessment process, it is vital to provide a channel of communication between students and their mentors, and modern technologies offer many opportunities for innovation in educational assessment, through rich new assessment tasks and potentially powerful scoring, reporting, and real-time feedback mechanisms for use by teachers and students (Scalise & Gifford, 2006). Computer-based platforms permit high-quality formative assessments that can fit closely with instructional activities and goals, as well as contributing to e-learning assessment. Therefore, the purposes of this chapter are:

• To review the concept of "formative assessment" and its development over time, in relation to other concepts such as "summative assessment" or "assessment for learning." Computer-provided feedback, as an essential part of formative online assessment, will be addressed, considering its advantages and offering practical advice for its design.
• To clarify the current confusion of terms in relation to "online assessment" and consider its benefits, limitations, and effectiveness.
• To provide guidance, from a pedagogical point of view, on the design of questions or activities for formative assessment in e-learning environments, and explore a 28-item categorization in order to reveal the broad potential of online assessment.
• To illustrate the four strategies of online formative assessment currently most widely used—online adaptive assessment, online self-assessment, online collaborative assessment, and portfolio—which are not mutually exclusive, and review recent research on their effectiveness for optimizing students' learning.
• To present a case study about an innovative teaching experience in which a computer-assisted assessment tool (Hot Potatoes) was used to design and apply self-assessment exercises in higher education.
• To conclude by providing practical advice for the planning, development, implementation, and review of quality formative online assessment.
The Concept of Formative Assessment

Formative vs. Summative Assessment

There is considerable debate within the higher educational community about assessment issues, given the proliferation of online classrooms and the emphasis on constructivist approaches to learning. Constructivist learning paradigms are learner-centred, and posit that learning occurs when students are actively engaged in making sense of phenomena as well as constructing and negotiating meanings with others (for an extensive review and analysis of this literature, see Comeaux, 2002). Recently, the Organization for Economic Cooperation and Development (OECD) lent its
support to formative assessment as a powerful learning tool:

Teachers using formative assessment approaches guide students toward development of their own learning to learn skills that are increasingly necessary as knowledge is quickly outdated in the information society. (OECD, 2005, p. 22)

Many teachers and researchers seem to have misunderstood the distinction between the terms "evaluation" and "assessment." Rowntree (1982) makes three points in relation to these terms:

1. Evaluation and assessment, though often used as synonyms, refer to different levels of investigation. Evaluation is concerned with the macro or holistic level of the learning event, taking into account the context of learning and all the factors that go with it, while assessment can be seen as the measurement of student learning, and is one of the elements involved in evaluation, at the micro level.
2. One aspect of any sound evaluation is allowance for the unexpected.
3. Above all, an evaluation is a designed and purposeful enquiry that is open to discussion.
The concept of formative assessment has changed over time. Scriven (1967, pp. 40-43) used the terms “formative” and “summative” evaluation to differentiate between the two roles evaluation may play in education. Using evaluation in the development or improvement of some educational process is “formative.” Using evaluation in decision-making about the end result of an educational process is “summative.” Scriven’s (1967) argument was focused on different functions for curriculum evaluation, rather than on the assessment of student learning. The Assessment Reform Group distinguished between “formative assessment” and “assessment for learning,” although some authors used the
terms synonymously. The term "formative" itself is open to a variety of interpretations, and often means no more than that assessment is carried out frequently and is planned at the same time as teaching. Such assessment does not necessarily have all the characteristics just identified as helping learning. It may be formative in helping the teacher to identify areas where more explanation or practice is needed, but for the pupils, the marks or comments on their work, though indicating their degree of success or failure, may not tell them how to make progress towards further learning (Assessment Reform Group, 1999, p. 7). In the literature on elaborate forms of formative assessment (i.e., "assessment for learning") there emerged a tendency to highlight the positive effects of formative assessment and the negative effects of summative assessment. The general disparagement of summative assessment is perhaps understandable given what Biggs (1998) described as its negative "backwash" effects. The "backwash effect" is a vicious cycle referring to the influence of testing on teaching and learning, the result of high-stakes examinations. People try to minimize the effort required in their activities by adjusting their working process (learning) to maximize the outcome (examination grades, ease of passing the exam, etc.), rather than focusing on the quality or intrinsic interest of their work (learning). Biggs (1998) suggests that backwash can have a positive result when assessment tasks are deliberately and firmly referenced to learning standards contained in the curriculum. The distinction between formative and summative assessment has resulted in some excellent research and development work on formative assessment. Nowadays, some researchers tend towards a more inclusive model of assessment. For example, Kennedy, Sang, Wai-ming, and Fok (2006) point out that a model of assessment should have the following characteristics:

1. All assessment needs to be conceptualized as "assessment for learning."
2. Feedback needs to be seen as a key function for all forms of assessment.
3. Teachers need to be seen as playing an important role not only in relation to formative assessment but in all forms of summative assessment as well—both internal and external.
4. Decisions about assessment need to be viewed in a social context, since in the end they need to be acceptable to the community.
At the heart of formative assessment is the feedback provided to students by the teacher. The iterative nature of evaluation should help to make the learning experience both more efficient and more effective, as the feedback is used to produce continuous improvement (Crompton, 1999).

Computer-Provided Feedback

In general terms, feedback is any message generated in response to a learner's action. As explained before, feedback is an essential part of formative assessment, and "formative feedback" would constitute additional information used by students to improve their performance, insofar as it provides hints and tips, points to the resources needed, reinforces the learning of concepts, and provides information on the following steps to be taken (Schulze & O'Keefe, 2002). Students must understand their learning goal and be able to compare their current performance with their desired performance; they must also have the ability to act in such a way as to close the gap (Gipps, 1994; Sadler, 1998), so that the informational feedback becomes intrinsically motivating for them (Covington, 1992; Pintrich & Schrauben, 1992). Teaching students to monitor their own performance is the ultimate goal of providing feedback (Sadler, 1989). According to Kulhavy and Stock (1989), effective feedback provides the learner with two types of information: verification and elaboration. Verification is the simple judgment as to whether an answer is correct or incorrect, while elaboration is the informational component providing relevant cues for guiding the learner toward a correct answer. Most researchers now share the view that successful feedback (feedback that facilitates the greatest gains in learning) must include both verification and elaboration. However, there are also considerable differences in types of elaboration (Mason & Brunning, 2000):

a. Informational elaboration provides a framework of relevant information from which the correct answer can be drawn.
b. Topic-specific elaboration provides more specific information about the target question or topic and leads the learner through the correct answer, though it does not address incorrect responses.
c. Response-specific elaboration addresses both the correct answer and incorrect response choices; if a learner selects an incorrect response, response-specific feedback explains why the selected response is incorrect and provides information about what the correct answer should be.
Computer-provided feedback has several advantages, such as the possibility to repeat feedback as many times as students want, and the unbiased, accurate, and nonjudgemental information provided by the computer, irrespective of student characteristics or the nature of the student response (Mason & Brunning, 2000). Designers of computer-provided feedback should consider many aspects, including: when and where to provide feedback, how much feedback to give, what kind of feedback is needed and the level of feedback. Schulze and O’Keefe (2002) make interesting recommendations about these aspects. With regard to when to provide feedback, if it is given at the end of the exercise, the assessment should be short, because nothing is more frustrating than completing a long activity
only to find out at the end that everything was done incorrectly. When the exercises are long, feedback should be given throughout the assessment, so that students can identify their errors and achievements. This is directly related to the levels of feedback. The concept of feedback levels comes into play when learners attempt part of an activity once and obtain general feedback that they are not quite correct and should keep working; they then receive more and more specific feedback until they complete the exercise correctly. In complex exercises, learners may not need to be told the correct strategy at their first attempt, and feedback should be given gradually. However, in less complex exercises, such as multiple-choice tests, learners benefit more from clear and detailed feedback about their answers at every attempt. As for the kind of feedback in terms of marks, Black and William (1998) summarized research evidence from 250 articles and chapters and noted that: Feedback has been shown to improve learning when it gives each pupil specific guidance on strengths and weaknesses, preferably without any overall marks. Thus, the way in which test results are reported to pupils so that they can identify their own strengths and weaknesses is critical. (Black & William, 1998, p. 144) In terms of where to place feedback, designers should take into account where the learner may struggle and where the critical “chunks” of content are, and then provide feedback in those places.
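The notion of feedback levels described above can be sketched as a simple rule: each unsuccessful attempt at the same item yields a progressively more specific message, from bare verification through to response-specific elaboration. A minimal illustration in Python; the item content, messages, and function names are invented for illustration, not drawn from any of the cited systems:

```python
# Sketch of "feedback levels": each failed attempt at an item triggers
# progressively more specific guidance. The messages below are
# hypothetical examples only.

FEEDBACK_LEVELS = [
    "Not quite correct -- keep working.",                         # general verification
    "Hint: remember to divide the sum by the number of scores.",  # informational elaboration
    "The mean of 2, 4 and 9 is (2 + 4 + 9) / 3 = 5.",             # response-specific elaboration
]

def feedback(attempt_number, is_correct):
    """Return a feedback message for the given (1-based) attempt."""
    if is_correct:
        return "Correct -- well done."
    # Clamp to the most specific message once the levels run out.
    level = min(attempt_number, len(FEEDBACK_LEVELS)) - 1
    return FEEDBACK_LEVELS[level]
```

For a complex exercise the designer would supply more levels before revealing the full strategy; for a simple multiple-choice item the list would collapse to a single, fully detailed message given at every attempt.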
ONLINE ASSESSMENT

Definition

There is some confusion among similar terms, such as "computer-based assessment," "computer-aided assessment," "online assessment," or "Web-based assessment," and such terms are indeed often used synonymously, even though they do not all mean the same thing. Recently, the term "e-assessment" has been used to mean assessment in e-learning environments, or the (electronic) process by which the learner's progress and understanding is assessed (BECTA, 2006), although the term has more frequently been applied to large-scale projects where abilities, competences, aptitudes, and personality are assessed. Computer-based assessment and computer-aided assessment are the most widely used terms, and refer to the use of a computer for viewing items and responding to them. The software managing the assessment may be on the individual computer or on one connected over a network, local or otherwise. When the software is accessed over a network, with current assessment solutions often being delivered from a local or distant server, the proper term is online assessment. Web-based assessments are online assessments delivered over the World Wide Web or a local area network. This type of assessment permits instructors to receive feedback (answers, results, errors, or comments) from students.
Advantages and Disadvantages of Online Assessment

We can identify advantages and disadvantages of online assessment for the institution, for the students, and for the teachers. The most important disadvantage for the institution would be the "cost" of setting up the system at the beginning, though this would be compensated by the subsequent savings of time and money, and by the reduction of the administrative burden if the system is properly used. The advantages for students are numerous (Collis, De-Boer, & Slotman, 2001; Lowry, 2005; Plous, 2000; Sherman, 1998; Ward & Newlands, 1998): immediate feedback, guided effort, diagnosis of problems in learning, a more flexible pace of learning, reaching and motivating a large and
diverse set of respondents, gaining experience in assessment methods, freedom from restrictions of time and place of assessment, and so on. The advantages increase if online assessment is adapted to the students' ability (adaptive online assessment), or if it is adapted to student learning styles (Clariana, 1997). Online assessment, used as self-assessment, can help students monitor their own progress, making it an important tool of formative assessment (Ibabe & Jauregizar, 2005). Students should be trained to become accustomed to the online assessment tool, so that the assessment methodology does not obstruct performance. Stress or anxiety caused by inexperience with a computer-based system may be a disadvantage of online assessment. However, research comparing performance using computer- and paper-based multiple-choice tests (Lee & Weerakon, 2001) has demonstrated that there is no measurable effect. Even so, Zakrzewski and Bull (1999) suggest that student anxiety can be reduced if students take formative assessments before summative tests. Teachers should also be trained to master the software so as to enable efficient delivery of the assessment; this requires a "cultural shift" towards investing time in designing new assessments rather than in traditional "marking" of assessments (Bull, 1999). In any case, universities are facing an important "academic shift" with the development of the European Higher Education Area, and the use of ICTs will be crucial in adapting to that challenge. As Macdonald (2004) notes, online feedback can be given not only to individuals, but also to a whole tutorial group, forming the basis for online collaborative assessment. Moreover, computer-based assessment provides focused and timely feedback not only to students, but also to teachers, who can identify the gaps in their students' knowledge or the questions that have not been adequately understood in class. Thus, teachers can give constructive and detailed feedback to
every student, a task that would otherwise be too arduous. The time-saving advantages of electronic marking are unquestionable (a wide range of topics and large groups can be assessed quickly, and results can be entered automatically into an administration system so that students receive their marks rapidly), but these advantages need to be offset against the time invested in writing challenging and effective questions, providing meaningful feedback, and structuring appropriate tests (Bull, 1999). Although, as James, McInnis, and Devlin (2002, p. 24) point out, the design of online examinations is likely to require more time and effort than conventional pen-and-paper examinations, these authors also recognize that computers offer the potential to present students with more complex scenarios through the use of interactive resources (images, sound, or simulation). Some authors have expressed fears about the "superficial" type of learning that online assessment can generate (Ryan, 2000). The concern is that online assessment would be designed for assessment tasks involving only memorization and recall. Indeed, using the technology for assessment involving higher-level cognitive skills, including the application of analysis and synthesis, is a great challenge (Hyde, Booth, & Wilson, 2003), but work is already in progress on the development of these kinds of assessment exercises in the online context.
Effectiveness of Online Assessment

Many studies have indicated that integrating the e-learning environment with online assessment has positive results (Buchanan, 2000; Henly, 2003; Velan, Killen, Dziegielewski, & Kumar, 2002). Buchanan (2000) showed that a Web-based formative assessment strategy is able to improve student learning interest and student scores. He argued that the "repeat the test" strategy (giving more opportunities for becoming familiar with learning materials) is an important element in Web-based
formative assessment design. However, he noted that this strategy should be implemented in conjunction with the functions of "provide with no answer" (pushing students to clarify what they did not understand) and "instant feedback," so that the Web-based formative assessment is more beneficial. For such feedback to be effective, it needs to be provided early in the learning process (Brown & Knight, 1994) and to offer guidance for improving performance (William & Black, 1996, p. 543). In the study by Wang, Wang, Wang, and Huang (2006), performance in the multiple-choice Web-based formative assessment group with six strategies was significantly better than in both the partial Web-based formative assessment strategy group and the paper-and-pencil test group. This finding suggests that the more formative assessment strategies are incorporated in the e-learning environment, the greater the learning effect obtained by students. The Web-based formative assessment condition contained six strategies:

1. "Repeat the test": This strategy allows students to take the same test item repeatedly if they make errors on it. However, once they answer a test item correctly three times, the item is deleted automatically.
2. "Provide with no answer": This strategy shows students the incorrect answers they made without offering the correct answer. However, it also allows students to leave the module to find the correct answers in their own way.
3. "Ask questions": This strategy allows students to send questions to the teacher by e-mail.
4. "Query scores": This strategy provides an interface for students to make queries about peer and personal scores.
5. "Monitor answering history": This strategy provides an interface for students to check their personal answer history for each item.
6. "All pass and then reward": This strategy rewards students with a Flash animation when they pass all the test items.
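The "repeat the test" and "all pass and then reward" strategies amount to simple bookkeeping over the item pool. A minimal sketch in Python; the class and names are assumptions for illustration, not taken from Wang et al.'s system:

```python
# Sketch of the "repeat the test" strategy: an item stays in the pool
# until it has been answered correctly three times; once every item has
# been retired, the learner has "all passed" and earns the reward.

PASS_THRESHOLD = 3  # correct answers needed before an item is retired

class FormativeTest:
    def __init__(self, item_ids):
        self.correct_counts = {item: 0 for item in item_ids}

    def record_answer(self, item_id, is_correct):
        if is_correct:
            self.correct_counts[item_id] += 1

    def remaining_items(self):
        """Items still below the pass threshold, offered again to the student."""
        return [i for i, n in self.correct_counts.items() if n < PASS_THRESHOLD]

    def all_passed(self):
        """True when every item has been retired ("all pass and then reward")."""
        return not self.remaining_items()
```

The other strategies layer onto the same record: "monitor answering history" reads the per-item counts back to the student, and "query scores" compares them across peers.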
DESIGN OF ACTIVITIES FOR FORMATIVE ONLINE ASSESSMENT

The factors that influence the choice and design of online assessment methods include the learners' needs, their access to technology, the available resources and, to some extent, the discipline or industry area. Currently, technology offers many new opportunities for innovation in educational assessment through rich new assessment tasks and potentially powerful scoring, reporting, and real-time feedback mechanisms. Through these and other technological innovations, the computer-based platform offers the potential for high-quality formative assessment that can closely match instructional activities and goals, and make meaningful contributions to e-learning or summative tests. It is important to examine what is to be learned and assessed, in order to identify appropriate methods for demonstrating these skills. A potential limitation of computer-based assessment resides in the design of questions and tasks with which computers can effectively interact, including scoring and score reporting.
Recommendations for Creating an Online Assessment Tool

Davis and Morrow (2004) suggest five questions for creating an assessment tool:

1. What is it we want to measure? We need to broadly define what it is we want to measure. Specifically, we need to identify the construct of interest. Most often, the topic of interest will fall into one of two categories:
• A type of cognitive achievement, either knowledge or a skill (e.g., maths skills or knowledge of Spanish history).
• A type of affective trait (e.g., motivation, interest in maths).
The concept of achievement can be broken down into knowledge and skills. Tests of knowledge measure an individual's understanding and comprehension of facts, concepts, and principles. Tests of skills involve testing the application of this knowledge to problems or situations (Haladyna, 1997). An example of this distinction would be:
• Knowledge item: "What is the difference between a median and a mean?"
• Skill item: "Given the set of test scores of 55, 89, 74, 68, 92, 73, 85, and 66, compute the mean."
2. Why are we developing an instrument? There are several reasons why instruments measuring achievement are created:
• To assess learning from a particular course or subject area.
• To assess the effectiveness or outcome of a program.
• To assess the level of student knowledge in relation to a particular competence.
In addition, the primary reason for the development of an instrument may be its use as a tool for formative assessment. It is important to define the instrument's purpose in order to justify the time and effort that will be put into the process.
3. How do we want to measure this construct? The commonest type of instrument is a selected response format. This format is relatively easy to administer and easy to score. Despite the benefits of the selected response format, many researchers are exploring the option of performance assessment. Some common examples of performance assessments include having students respond to an essay prompt, play a piece of music on an instrument, or perform a science experiment in front of a group of raters. The use of computers in test administration has led to the development of adaptive tests, as well as alternative item types with audio or video files, or items that allow for Internet searches.
4. Who will be taking the test? Defining the target population of a test is extremely important at the outset of instrument development, and for several reasons. If a test is made available for public use, the intended set of respondents must be identified. Just as you would not give medicine intended for an adult to a young child, it is wrong to give a test designed for one population to members of another. Examples of well-defined populations would be:
• An assessment of Spanish history knowledge designed for college students.
• An assessment of self-confidence for high-school seniors.
5. What are the conditions of measurement? We need to describe how the test will be used. It is important to consider the time and effort involved in developing and scoring the test. With online testing there arise other problems, such as security, cheating, time limits, and examinee anxiety. Thus, it is essential to emphasize the need to use a range of methods for collecting evidence of valid assessment.
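The answers to the five questions can be captured as a short design blueprint that is completed before any items are written. The field names below are one possible encoding, not part of Davis and Morrow's proposal:

```python
# A test-design blueprint recording answers to the five questions.
# All field names and example values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AssessmentBlueprint:
    construct: str     # 1. what to measure (a knowledge, skill, or affective trait)
    purpose: str       # 2. why the instrument is being developed
    item_format: str   # 3. how to measure (selected response, performance, ...)
    population: str    # 4. who will take the test
    conditions: list = field(default_factory=list)  # 5. conditions of measurement

blueprint = AssessmentBlueprint(
    construct="knowledge of Spanish history",
    purpose="formative assessment within a college course",
    item_format="selected response (multiple choice)",
    population="college students",
    conditions=["online, untimed", "self-assessment, no grades recorded"],
)
```

Writing the blueprint down forces each design decision to be explicit, so that, for instance, a test blueprinted for college students is not quietly reused with another population.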
Types of Questions and Tasks for E-Learning Assessment

Formative assessment refers to those activities that are used to help students learn. These types of activities include short tests and quizzes, questions and answers in the lesson, assignments, homework, and so on. Questions, tasks, activities, and other methods of eliciting student responses are often called "items" in the assessment process. One organizational scheme describes innovative
features for computer-administered items, such as the technological enhancements of sound, graphics, animation, video, or other new media incorporated into the item stem, response options, or both (Parshall, Davey, & Pashley, 2000). For some innovative formats, students can, for instance, click on graphics, drag or move objects, re-order a series of statements or pictures, or construct a graph or other representation. Innovation may involve not just a single item, but the way items flow, as in the case of branching through a changing series of items contingent on an examinee's responses. The question type currently dominating large-scale computer-based testing and many e-learning assessments is the standard multiple-choice question, which generally includes a prompt followed by a small set of responses from which students are expected to select the best choice. Scalise and Gifford (2006) present a taxonomy or categorization of 28 innovative item types that may be useful in computer-based assessment. They reviewed 44 papers and book chapters on item types and item designs (many of them classic references regarding particular item types), with the intention of consolidating considerations of item constraint for use in e-learning assessment designs. Organized according to the degree of constraint on the respondent's options for answering or interacting with the assessment item or task, the proposed taxonomy (shown in Table 1) describes a set of iconic item types termed "intermediate constraint" items. Table 1 shows the taxonomy organized by the level of constraint in the item/task response format. The most constrained item types, at left in column 1, use fully selected response formats. The least constrained item types, at right in column 7, use fully constructed response formats. In between are the "intermediate constraint" items, organized with decreasing degrees of constraint from left to right.
All item types in the item taxonomy can involve new response actions and inclusion of
different media. Thus, by combining intermediate constraint types and varying the response actions and media inclusion, e-learning instructional designers can create a vast array of innovative assessment approaches and can arguably match assessment needs and evidence for many instructional design objectives. The need to integrate learning and assessment is considered critical (Hyde et al., 2003), as is the assessment of complex skills.
STRATEGIES FOR FORMATIVE ONLINE ASSESSMENT

A range of assessment methods and tools are being used in the online environment, and a body of literature describing these uses is being developed. Below, four strategies of formative online assessment are described. They are not mutually exclusive, as in the case, for example, of online adaptive assessment and self-assessment. For a complete review, see Booth and Berwyn (2003).
Table 1. Intermediate constraint taxonomy for e-learning assessment questions and tasks (adapted from Scalise & Gifford, 2006). Item types run from the most constrained (fully selected response, column 1) to the least constrained (fully constructed response, column 7); within each type, variants A to D run from less complex to more complex:

1. Multiple Choice: 1.A. True/False; 1.B. Alternative Choice; 1.C. Conventional or Standard Multiple Choice; 1.D. Multiple Choice with Media Distractors.
2. Selection/Identification: 2.A. Multiple True/False; 2.B. Yes/No with Explanation; 2.C. Multiple Answer; 2.D. Complex Multiple Choice.
3. Reordering/Rearrangement: 3.A. Matching; 3.B. Categorizing; 3.C. Ranking & Sequencing; 3.D. Assembling Proof.
4. Correction/Substitution: 4.A. Interlinear; 4.B. Sore-Finger; 4.C. Limited Figural Drawing; 4.D. Bug/Fault Correction.
5. Completion: 5.A. Single Numerical Constructed; 5.B. Short-Answer & Sentence Completion; 5.C. Cloze-Procedure; 5.D. Matrix Completion.
6. Construction: 6.A. Open-Ended Multiple Choice; 6.B. Figural Constructed Response; 6.C. Concept Map; 6.D. Essay & Automated Editing.
7. Portfolio/Presentation: 7.A. Project; 7.B. Demonstration, Experiment, Performance; 7.C. Discussion, Interview; 7.D. Diagnosis, Teaching.

Online Self-Assessment with Feedback

Self-assessment can enrich the learning process, as it helps students to self-monitor their learning, increases their strategic knowledge of how to go about improving, and improves their motivation. Using self-assessment information requires control over one's cognitive activities, or metacognition: students must understand what strategies and skills they should use in each task, and know when and how to use them (Brookhart, 2001); the end result is more autonomous and self-directed learners. Furthermore, self-assessment puts the learner in control of the instruction provided, and information that is irrelevant to students can be eliminated from their learning experience (Quinn & Reid, 2003). While self-assessment has typically been used after a learning activity as a review, it has an important role as a prelude to a specific area of study (Challis, 2005). Furthermore, if we add the potential of the Internet to the benefits of self-assessment, the result is a highly relevant diagnostic and assessment tool.

With regard to marks in online self-assessment, Taras (2001) has consistently stressed the importance of not grading self-assessment, since grades can distract and "block" students. "By desisting from giving the students grades until the students have carried out self-assessment, they are being encouraged to focus on their work with as little emotional interference as possible" (Taras, 2001, p. 609). Students should be free to explore their knowledge and areas of weakness, and to make mistakes without fear of this having consequences for their final marks. In any case, online self-assessment results are interesting for teachers, because they can identify students at risk and monitor student performance early in the course, so that the appropriate resources for helping them can be recommended. Many studies note the importance of relevant feedback and self-assessment for efficient learning (Black & William, 1998; Brookhart, 2001; Dearing, 1997; Taras, 2003). Feedback and self-assessment should be part of the same process. Taras (2003) confirmed that self-assessment without tutor feedback cannot help students to become aware of their errors, and that therefore they cannot understand their cause. Taras' (2003) version of self-assessment has two distinctive features: minimal and integrated tutor feedback (i.e., according to the learning needs of the student), and feedback provided before a grade.
Online Adaptive Assessment

Online adaptive assessment is an innovative online form of assessment in which an examinee is presented with items in a sequence that depends on the correctness of the response to the previous item. Therefore, adaptive assessment is characterized by the adaptation of the items depending on the
student's performance ("testing on demand"). As students answer the items correctly, the complexity of the exercises grows, so that the level of difficulty is constantly revised. As Challis (2005) points out, the score is derived not from the number of correct answers, but rather from the difficulty level of the questions answered correctly. The difference between online adaptive assessment and other online assessment tools is relevant, since adaptive assessment is more dynamic and more "student centred" than other assessments. Adaptive online testing can be used as a diagnostic tool that contributes to meaningful learning, and can thus come to constitute an important type of formative assessment. Furthermore, adaptive testing makes student cheating less likely because, as the items change depending on the answers, it is quite difficult to copy an exercise and pass it to other students for them to memorize the correct answers. A significant advantage for self- or group testing is that students can specify from the outset the level at which they wish to work (for example, entry, intermediate, or advanced) and change level, with less need for repetitive drill before they can be confident of having the requisite level of ability. Online adaptive assessment can reduce testing time by more than 50% while maintaining the same level of reliability. Shorter testing times also reduce fatigue, a factor that can significantly affect an examinee's test results (Rudner, 1998). Another advantage is that examinees at a low achievement level are not required to respond to items that are very difficult and far beyond their level, thus reducing potential negative psychological effects (e.g., examinees becoming despondent or more test-anxious). Similarly, examinees at a high achievement level are not required to answer a number of items that are much too simple for them, thus reducing the potential for boredom in that group of examinees (Computer Adaptive Assessment Project, 2005).
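A toy version of this adaptation rule: raise the difficulty after a correct answer, lower it after an incorrect one, and derive the score from the highest difficulty answered correctly rather than from the raw count. Operational adaptive tests select items with item response theory rather than this simple step rule; the sketch only illustrates the idea, and all names and levels are invented:

```python
# Toy adaptive assessment: difficulty moves up on a correct answer and
# down on an incorrect one; the provisional score is the highest
# difficulty level answered correctly, not the number of correct answers.

MIN_LEVEL, MAX_LEVEL = 1, 10

def run_adaptive(responses, start_level=5):
    """responses: correctness (True/False) of each answer, in the order
    the items were presented. Returns (final_level, score)."""
    level, score = start_level, 0
    for correct in responses:
        if correct:
            score = max(score, level)               # credit the difficulty reached
            level = min(level + 1, MAX_LEVEL)       # next item is harder
        else:
            level = max(level - 1, MIN_LEVEL)       # next item is easier
    return level, score
```

The sketch also makes Rudner's cheating scenario concrete: deliberately missing the early responses drives the level down, so that the remaining items, and any later revised answers, are drawn from the easy end of the pool.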
Despite the advantages described, computer adaptive tests also have limitations, such as possible inequities because each examinee receives a different set of questions, or the impossibility (nearly always) of going back and changing the answers. Rudner (1998) also points out that students can cheat to obtain a higher score: a clever examinee could intentionally miss early questions, so that the program assumes low ability and selects a series of easy questions. The examinee could then go back and change the answers, getting them all right. The result could be 100% correct answers, which would place the examinee's estimated ability at the highest level.
Online Collaborative Assessment

Collaborative learning and assessment can be developed effectively in the online environment. This means that students can also participate in assessment through peer review. Using collaborative assessment, learners employ online resources to work collaboratively on a student's course work assessment. An added benefit of this technique is the development of communication and team-building skills highly valued by instructors. This assessment strategy involves the student, their peers, and their tutor in a thoughtful and critical examination of each student's course work. Students can learn to comment on each other's work as an integral part of summative assessment, and this has been shown to be an effective approach when helping them to develop a critical approach to their own written work (see, for example, Boud & Falchikov, 1989). Their performance will determine whether, as e-learners, they have internalized the new skills. In terms of trainer collaboration on assessment, some Web-based survey tools (e.g., Zoomerang, SurveyMonkey, and SurveyShare) have incorporated collaborative features to share survey templates, questions, and results (Curtis, 2002).
The research by McConnell (2002) shows that a positive social climate is necessary in developing and sustaining collaborative assessment, and that this form of assessment helps students to reduce dependence on lecturers as the only or major source of judgment about the quality of learning. Students develop skill and know-how about self- and peer assessment and see themselves as competent in making judgments about their own and each other’s work; this approach also creates a certain mindfulness and reflection among students (Garrison, 2003; Hiltz, 1994; Poole, 2000), which are certainly good lifelong learning skills. Sluijsmans et al. (2006) presented three arguments for the implementation of peer assessment in e-learning. First, students can play a role in the choice of performance assessment tasks and in discussing assessment. Second, the student shares responsibility and collaborates in a continuous dialogue with the teacher. Third, peer assessment can decrease the workload of teachers. Peer review is a demanding task for undergraduates, because they need the confidence firstly to judge fellow students’ work, and secondly to be able to criticize without giving offence (Macdonald, 2004).
Portfolio Portfolio assessment is defined as any method by which a student’s work is stored over time so that it can be reviewed in relation to both process and product (Knight, 1994). Portfolios contain a sample of various types of students’ work that can identify how academically successful they are, and that can allow them to share their work with peers. As it does not focus exclusively on solutions, but also on the intermediate steps and draft products involved in the performance of the task, the portfolio approach teaches students the value of self-assessment and builds self-esteem, helping them engage in meaningful learning (Chang, 2002; ITC, 2003; Reeves, 2000). The portfolio provides authentic assessment opportunities for reflection and a context in which students actively
participate in the evaluation process (Banta, 2003). Therefore, the portfolio is more than a type of assessment, implying a new consideration of the teaching process; most importantly, the portfolio involves a process, rather than a final aim (Agra, Gewerc, & Montero, 2003). Trudi Cooper suggests six steps in a portfolio-building process (Cooper, 1997; Cooper & Emden, 2000; Cooper, Hutchins, & Sims, 1999):

1. To identify the areas of skills that the student should develop.
2. Taking into account these skill areas, to develop specific learning outcomes to be achieved by the students.
3. To identify appropriate learning strategies so that students can achieve their learning outcomes.
4. To identify performance indicators that establish whether students have achieved their learning outcomes and indicate the evidence the students need to collect.
5. To collect evidence that demonstrates the students have met the performance indicators.
6. To organize this evidence in a portfolio so that teachers can easily understand how the evidence relates to each performance indicator.
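Steps 4 to 6 of this process imply a simple data model: each performance indicator accumulates pieces of evidence, and the portfolio is organized so a teacher can read the mapping directly. A minimal sketch, with all names invented for illustration:

```python
# Minimal portfolio structure: evidence is filed under the performance
# indicator it supports (Cooper's steps 4-6). All names are illustrative.

class Portfolio:
    def __init__(self, indicators):
        # Step 4: the performance indicators the evidence must address.
        self.evidence = {indicator: [] for indicator in indicators}

    def add_evidence(self, indicator, item):
        # Step 5: collect evidence against a specific indicator.
        self.evidence[indicator].append(item)

    def unmet_indicators(self):
        # Step 6: teachers see at a glance which indicators still lack evidence.
        return [i for i, items in self.evidence.items() if not items]
```

Because the evidence is keyed by indicator rather than stored as a flat collection, both intermediate drafts and final products can be filed against the same outcome, matching the process-oriented view of portfolio assessment described above.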
The benefits of portfolio-based assessment over other assessment approaches have been well established (see, for example, Biggs & Tang, 1997; Brooks & Madda, 1999; Cooper, 1999; Hutchins, Sims, & Cooper, 1999). Love and Cooper (2004) point out the main advantages of this tool, such as its capacity to contain many different types of evidence and from different sources, the active involvement of students in their processes, equity, and moderation in the assessment process and its suitability to assessment in lifelong learning contexts. Moreover, portfolios provide a means for students to learn to manage their own professional development, since they offer them easy
access to evidence of professional or generic graduate skills (Cooper, 1999; Cooper & Love, 2000, 2001a, 2002). Online portfolios (the presentation, via the Web, of digital evidence of progress and achievement) have the added advantage of the interactivity provided by the Web, and the easier organization and updating of the material. Used in conjunction with appropriate software solutions, online portfolio-based assessment can relieve teachers of some of the more tedious aspects of assessment and permit parts of the assessment process to be automated (Cooper & Love, 2001b). As Agra et al. (2003) describe in their experience of implementing an online portfolio in a postgraduate degree, students' portfolios were accessible online to tutors and peers, so that teachers could view students' progress and give them feedback, and students could also exchange ideas fluidly with their peers.
A CASE STUDY: SELF-ASSESSMENT AND LEARNING

This case study illustrates an innovative teaching experience in the Psychology Faculty at the University of the Basque Country. In this study, a computer-assisted assessment tool (Hot Potatoes) was used to design and apply self-assessment exercises that are automatically corrected online (Ibabe, Gómez, & Jauregizar, 2006). Learner satisfaction and learning perception were evaluated. The main aim of this project was to verify whether interactive self-assessment improved university students' academic results on their Data Analysis course. The procedure employed was as follows. First of all, we acquired the additional service of the TexToys (Creative Technology) program (http://www.hotpotatoes.net/help/lw.php) in order to record the results of the Hot Potatoes assessment. Next, we
Formative Online Assessment in E-Learning
Figure 1. Students' final marks (D to A+) depending on use or nonuse of the self-assessment tool (y-axis: percentage of students; series: users vs. non-users)
designed self-assessment exercises (multiple-choice and short-answer questions, fill-in-the-blanks exercises, crosswords, etc.) in HTML format. Twenty items were designed for each unit of the teaching program (100 items in total). Furthermore, each unit included some revision exercises, and there was a final exercise covering content from all the units. It was explained to students that using the tool would not in itself mean they obtained higher final marks. The self-assessment items were published on the Internet as each unit was finished in class. At the end of the semester, students completed a questionnaire about their satisfaction with the self-assessment tool and their learning perception. Finally, summative assessment marks were awarded. A high proportion of students used the self-assessment tool (46% of all students registered), considering that the exercises were voluntary, carried no extra incentive, fell outside the normal timetable, and required an Internet connection. Results show an acceptable rate of satisfaction among students who used Hot Potatoes. Sixty-six percent of students "agreed" or "totally agreed" with the statement, The tool was useful for revising the content explained in the classroom. Similarly, 66% of the learners
"agreed" or "totally agreed" with the statement, The exercises were useful for understanding and processing the information. In response to the item, In my opinion, self-assessment exercises can be a useful complementary tool for learning, 80% "agreed" or "totally agreed." The mean of all items, on a Likert scale from 1 to 5, was 3.8. The results suggest that students who use interactive self-assessment exercises as a complementary study tool obtain better final marks. There is a positive correlation between the frequency with which students use these exercises and academic results, r(81) = .24, p < .05. Likewise, the correlation between the number of exercises done and the final mark was also positive, r(81) = .25, p < .05. In other words, students who do more self-assessment exercises obtain higher final marks. Figure 1 shows what kind of student makes use of self-assessment exercises. The results indicate that the majority of students with higher marks (B, A, or A+) had used the self-assessment tool. In short, interactive self-assessment exercises can act as formative assessment tools for improving students' learning process and learning satisfaction, thus increasing the quality of education.
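The correlations reported above are ordinary Pearson product-moment coefficients. As a sketch of how such a coefficient is obtained, the following Python function computes r from first principles; the data are invented for illustration only and do not reproduce the study's actual sample of 83 students.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: number of self-assessment exercises completed
# and final mark (0-10 scale) for six students.
exercises = [0, 2, 5, 8, 10, 12]
marks     = [4.0, 5.5, 5.0, 7.0, 6.5, 8.0]
print(round(pearson_r(exercises, marks), 2))  # 0.92
```

In practice one would also test the coefficient for significance against the sample size (the study reports p < .05 with 81 degrees of freedom), which a statistics package handles directly.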
CONCLUSION: KEY POINTS FOR FORMATIVE ONLINE ASSESSMENT

On the basis of the research, and by way of conclusion, listed below are key points to be taken into account when planning, developing, implementing, and reviewing quality formative online assessment (Bull, 1999; Challis, 2005; Hyde et al., 2003; Kendle & Northcote, 2000; Macdonald, 2004; Stephens, 2001).

1. Planning stage
• Assessment must be clearly related to the aims and objectives of the subject.
• Plan in advance the competence to be assessed and the way students can demonstrate their performance (evidence of achievement).
• Keep in mind that the technology available should not determine the methods used: online assessment should be pedagogically led, not technology-driven.
• Make assessment part of the online learning process. The learning strategies and assessment strategies should be developed simultaneously.

2. Development stage
• Variety: Include a range of methods for collecting evidence of competence (for example, online portfolios).
• Authenticity: Use open-ended tasks that simulate workplace tasks, in order to assess competence development.
• Collaboration: Allow interaction between learners and others, and use appropriate communication technologies.
• Feedback: Ensure appropriate feedback mechanisms are possible, using peer feedback and peer tutoring.
• Learner responsibility: Provide options and opportunities for accountability within assessment tasks.

3. Implementation stage
• State the assessment criteria to students in advance; be clear about the criteria used for the assessment.
• Use pretesting to alleviate student anxiety about ICTs, and take account of the access to technology available to students.
• Help eliminate cheating by devising ways of knowing about learners' abilities and by gathering a range of evidence of competence.
• For ethical reasons, students should be aware of how their assessment results will be used.
• Try to develop learner-centred assessment, using strategies such as self-assessment, adaptive assessment, or collaborative assessment, and allowing students to participate more in the assessment process.

4. Review stage
• Share resources with other experts to help enhance one's own materials.
• Keep up to date with the constantly changing technology.
• Review and evaluate the assessment strategies used, the evidence collected, and the judgments of other assessors in order to validate assessment.
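Several of these points (stated criteria, immediate feedback, learner-centred self-assessment) come together in the kind of automatic correction used in the case study above. The sketch below is not Hot Potatoes code; it is a generic illustration, with invented item keys and feedback messages, of how a short quiz can be marked automatically and return immediate formative feedback that points the learner to the unit to revise.

```python
def mark_quiz(answers, key, feedback):
    """Automatically correct a quiz: return an overall score plus
    item-by-item formative feedback shown immediately to the learner."""
    results = []
    correct = 0
    for item, given in answers.items():
        ok = given == key[item]
        correct += ok
        results.append((item, ok, "" if ok else feedback[item]))
    score = 100 * correct // len(key)
    return score, results

# Hypothetical answer key and remediation hints for a three-item quiz.
key = {"q1": "b", "q2": "a", "q3": "c"}
feedback = {
    "q1": "Revise unit 1: the median is robust to outliers.",
    "q2": "Revise unit 2: see the worked example on variance.",
    "q3": "Revise unit 3: correlation does not imply causation.",
}

score, results = mark_quiz({"q1": "b", "q2": "c", "q3": "c"}, key, feedback)
print(score)  # 66
```

Because the feedback is formative rather than graded, the learner can retry the items; only the summative assessment at the end of the course counts toward the final mark.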
FUTURE RESEARCH DIRECTIONS

Online assessment is seen by many as useful for assessing lower-order skills, such as the recall of knowledge, while not being well equipped to assess higher-order skills, such as the ability to apply knowledge in new situations or to evaluate and synthesize information (Ashton, Beevers, Milligan, Schofield, Thomas, & Youngson, 2006). Future trends in formative online assessment research should be directed to the assessment of complex skills. ICTs are often used to simulate
the context of professional practice in education, but we know little about how to incorporate the assessment of competences (complex skills with their underlying knowledge structures and attitudes; van Merriënboer, 1997; for example, designing a house in the case of an architect) into e-learning. Currently, institutions of higher education are confronted with a demand for competence-based learning (CBL), which is expected to narrow the gap between learning in the educational setting and future workplace performance (Bastiaens & Martens, 2000). The assessment task is described in terms of a certain performance that is perceived as worthwhile and relevant to the learner, and can therefore be defined as performance assessment (Wiggins, 1989). Performance assessment focuses on the ability to use combinations of acquired skills and knowledge, and therefore fits in well with the theory of powerful learning environments (Linn, Baker, & Dunbar, 1991). Because the goals as well as the methods of instruction are oriented towards integrated and complex curricular objectives, assessment practices need to reflect this complexity and to use various kinds of assessments in which learners have to interpret, analyze, and evaluate problems and explain their arguments. In CBL, it is important that a number of performance assessments be organized to gather reliable and valid information about a learner's competence development (Sluijsmans et al., 2006). Thus, the learner is required to perform similar types of tasks in a variety of situations under the same conditions. Many efforts have been made to implement CBL in face-to-face education, but an electronic learning environment represents a still greater challenge. Integrated simulations can be used to provide answering mechanisms, feedback, or different forms of assessment, and students can be assessed in the same environment in which they learn.
Measuring competencies requires the implementation of new and innovative processes. Online formative assessment can be used
as a competency teaching method that arouses interest and reflective processes, activates prior knowledge, clarifies meanings, and provides information about learners' progress. Online collaborative assessment can model appropriate learning strategies and create online communities of learners. For instance, online portfolios could be incorporated to measure complex and transferable skills. In any case, further research is required to better understand the nature of competency-based performance assessment in e-learning and the strategies and tools needed to assess learners' competency.
REFERENCES

Agra, M. J., Gewerc, A., & Montero, M. L. (2003). El portafolios como herramienta de análisis en experiencias de formación online y presenciales. Enseñanza, 23, 101-114.

Armitage, S., & O'Leary, R. (2003). A guide for learning technologists. Learning and teaching support network. York, UK: LTSN Generic Center.

Ashton, H. S., Beevers, C. E., Milligan, C. D., Schofield, D. K., Thomas, R. C., & Youngson, M. A. (2006). Moving beyond objective testing in online assessment. In S. L. Howell & M. Hricko (Eds.), Online assessment & measurement: Case studies from higher education, K-12 and corporate (pp. 116-128). Hershey, PA: Idea Group.

Assessment Reform Group. (1999). Assessment for learning: Beyond the black box. Cambridge, UK: University of Cambridge School of Education. Retrieved October 29, 2007, from http://arg.educ.cam.ac.uk/AssessInsides.pdf

Banta, T. W. (Ed.). (2003). Portfolio assessment: Uses, cases, scores and impact. San Francisco: Jossey-Bass.
Bastiaens, T., & Martens, R. (2000). Conditions for Web-based learning with real events. In B. Abbey (Ed.), Instructional and cognitive impacts of web-based education (pp. 1-32). London: Idea Group Publishing.

BECTA. (2006). Retrieved October 29, 2007, from www.becta.org.uk

Biggs, J. (1998). Assessment and classroom learning: A role for summative assessment? Assessment in Education, 5(1), 103-110.

Biggs, J., & Tang, C. (1997). Assessment by portfolio: Constructing learning and designing teaching. Research and Development in Higher Education, 79-87.

Black, P., & William, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7-74.

Booth, R., & Berwyn, C. (2003). The development of quality online assessment in vocational education and training. Leabrook, Australia: Australian Flexible Learning Framework. Retrieved October 29, 2007, from www.ncver.edu.au/research/proj/nr1F02_1.pdf

Boud, D., & Falchikov, N. (1989). Quantitative studies of self-assessment in higher education: A critical analysis of findings. Higher Education, 18(5), 529-549.

Brookhart, S. M. (2001). Successful students' formative and summative use of assessment information. Assessment in Education, 8(2), 153-169.

Brooks, B. A., & Madda, M. (1999). How to organize a professional portfolio for staff and career development. Journal for Nurses in Staff Development, 15(1), 5-10.

Brown, S., & Knight, P. T. (1994). Assessing learners in higher education. London: Kogan Page.

Buchanan, T. (2000). The efficacy of a world-wide Web mediated formative assessment. Journal of Computer Assisted Learning, 16, 193-200.

Bull, J. (1999). Computer-assisted assessment: Impact on higher-education institutions. Educational Technology & Society, 2(3), 123-126.

Challis, D. (2005). Committing to quality learning through adaptive online assessment. Assessment & Evaluation in Higher Education, 30(5), 519-527.

Chang, C. C. (2002, March). Building a Web-based learning portfolio for authentic assessment. Paper presented at Proceedings International Conference on Computers in Education (ICCE'02), Melbourne, Australia.

Clariana, R. B. (1997). Considering learning style in computer-assisted learning. British Journal of Educational Technology, 28(1), 66-68.

Collis, B., De-Boer, W., & Slotman, K. (2001). Feedback for Web-based assignments. Journal of Computer Assisted Learning, 17, 306-313.

Comeaux, P. (Ed.). (2002). Communication and collaboration in the online classroom: Examples and applications. Bolton, MA: Anker.

Computer Adaptive Assessment Project. (2005). What is CAA? Retrieved October 29, 2007, from http://www.castlerockresearch.com/caa/WhatisCAA.aspx

Cooper, T. (1997). Portfolio assessment: A guide for students. Perth, WA: Praxis Education.

Cooper, T. (1999). Portfolio assessment: A guide for lecturers, teachers and course designers. Perth, WA: Praxis Education.

Cooper, T., & Emden, C. (2000). Portfolio assessment: A guide for nurses and midwives. Perth, WA: Praxis Education.

Cooper, T., Hutchins, T., & Sims, M. (1999). Developing a portfolio which demonstrates competencies. In M. Sims and T. Hutchins (Eds.), Learning materials: Certificate in children's services; 0-6 years (bilingual support) (pp. 329). Perth, WA: Ethnic Childcare Resource Inc. Western Australia.
Cooper, T., & Love, T. (2000). Portfolios in university-based design education. In C. Swann & E. Young (Eds.), Re-inventing design education in the university (pp. 159-166). Perth, WA: School of Design, Curtin University.

Cooper, T., & Love, T. (2001a). Online portfolio assessment in information systems. In S. Stoney & J. Burn (Eds.), Working for excellence in the economy (pp. 417-426). Perth, WA: We-B Research Centre, Edith Cowan University.

Cooper, T., & Love, T. (2001b). Online portfolios: Issues of assessment and pedagogy. In International Education Research Conference, Melbourne. Retrieved October 29, 2007, from http://www.aare.edu.au/01pap/coo01346.htm

Cooper, T., & Love, T. (2002). Online portfolios: Issues of assessment and pedagogy. In P. Jeffrey (Ed.), AARE 2001: Crossing borders: New frontiers of educational research. Coldstream, Victoria: AARE Inc.

Covington, M. V. (1992). Making the grade: A self-worth perspective on motivation and school reform. Cambridge, UK: Cambridge University Press.

Crompton, P. (1999). Evaluation: A practical guide to methods. Retrieved October 29, 2007, from http://www.icbl.hw.ac.uk/ltdi/implementing-it/eval.pdf

Curtis, J. B. (2002). Collaborative tools for e-learning. Chief Learning Officer: Solutions for Enterprise Productivity. Retrieved October 29, 2007, from http://www.clomedia.com/content/templates/clo_feature.asp?articleid=41&zoneid=30

Davis, S. L., & Morrow, A. K. (2004). Creating usable assessment tools: A step-by-step guide to instrument design. Retrieved October 29, 2007, from http://www.jmu.edu/assessment/wm_library/ID_Davis_Morrow_AAHE2004.pdf

Dearing, R. (1997). Higher education in the learning society. London: HMSO.

Garrison, D. R. (2003). Cognitive presence for effective asynchronous online learning: The role of reflective inquiry, self-direction and metacognition. In J. Bourne & J. C. Moore (Eds.), Elements of quality online education: Practice and direction (pp. 47-58). Needham, MA: Sloan-C.

Gipps, C. (1994). Beyond testing: Towards a theory of educational assessment. London: Falmer Press.

Haladyna, T. (1997). Writing test items to evaluate higher order thinking. Needham Heights, MA: Allyn & Bacon.

Henly, D. C. (2003). Use of Web-based formative assessment to support student learning in a metabolism/nutrition unit. Journal of Dental Education, 7(3), 116-122.

Hiltz, S. R. (1994). The virtual classroom: Learning without limits via computer networks. Norwood, NJ: Ablex.

Hutchins, T., Sims, M., & Cooper, T. (1999). Developing a portfolio which demonstrates competencies. In M. Sims & T. Hutchins (Eds.), Learning materials: Certificate in children's services; 0-6 years (bilingual support) (pp. 329). Perth, WA: Ethnic Childcare Resource Inc. Western Australia.

Hyde, P., Booth, R., & Wilson, P. (2003). The development of quality online assessment in VET. In H. Guthrie (Ed.), Online learning: Research readings (pp. 87-106). Leabrook, South Australia: NCVER.

Ibabe, I., Gómez, J., & Jauregizar, J. (2006). Aplicación de pruebas de auto-evaluación interactivas para potenciar el trabajo autónomo de los estudiantes conforme al sistema ECTS. In J. Guisasola & T. Nuño (Eds.), La educación universitaria en tiempos de cambio (pp. 63-74). San Sebastián, Spain: Universidad del País Vasco.

Ibabe, I., & Jauregizar, J. (2005). Ejercicios de autoevaluación con Hot Potatoes. In I. Ibabe &
J. Jauregizar (Eds.), Cómo crear una web docente de calidad (pp. 65-100). A Coruña, Spain: Netbiblo.

ITC. (2003). Online assessment techniques. Retrieved October 29, 2007, from http://web.utk.edu/~dsuppach/indep/assessment2.htm

James, R., McInnis, C., & Devlin, M. (2002). Assessing learning in Australian universities. Canberra, Australia: Center for the Study of Higher Education, The University of Melbourne & The Australian Universities Teaching Committee.

Kendle, A., & Northcote, M. (2000). The struggle for balance in the use of quantitative and qualitative online assessment tasks. In Proceedings ASCILITE 2000, Coffs Harbour. Retrieved October 29, 2007, from http://www.ascilite.org.au/conferences/coffs00/papers/amanda_kendle.pdf

Kennedy, J. K., Sang, J. C. K., Wai-ming, F. Y., & Fok, P. K. (2006, May). Assessment for productive learning: Forms of assessment and their potential for enhancing learning. Paper presented at the 32nd Annual Conference of the International Association for Educational Assessment, Singapore.

Knight, M. E. (1994). Portfolio assessment: Application of portfolio analysis. Lanham, MD: University Press of America.

Kulhavy, R. W., & Stock, W. A. (1989). Feedback in written instruction: The place of response certitude. Educational Psychology Review, 1(4), 279-308.

Lee, G., & Weerakoon, P. (2001). The role of computer-aided assessment in health professional education: A comparison of student performance in computer-based and paper-and-pen tests. Medical Teacher, 23, 152-157.

Linn, R. L., Baker, E. L., & Dunbar, S. B. (1991). Complex, performance-based assessment: Expectations and validation criteria. Educational Researcher, 20(8), 15-21.
Love, T., & Cooper, T. (2004). Designing online information systems for portfolio-based assessment: Design criteria and heuristics. Journal of Information Technology Education, 3, 65-81.

Lowry, R. (2005). Computer-aided self-assessment: An effective tool. Chemistry Education Research and Practice, 6(4), 198-203.

Macdonald, J. (2004). Developing competent e-learners: The role of assessment. Assessment & Evaluation in Higher Education, 29(2), 215-226.

Mason, B. J., & Bruning, R. (2000). Providing feedback in computer-based instruction: What the research tells us. Retrieved October 29, 2007, from http://dwb.unl.edu/Edit/MB/MasonBruning.html

McConnell, D. (2002). The experience of collaborative assessment in e-learning. Studies in Continuing Education, 24(1), 73-92.

OECD. (2005). Formative assessment: Improving learning in secondary classrooms. Paris: OECD.

Parshall, C. G., Davey, T., & Pashley, P. J. (2000). Innovative item types for computerized testing. In W. Van der Linden & C. A. W. Glas (Eds.), Computerized adaptive testing: Theory and practice (pp. 129-148). Norwell, MA: Kluwer Academic Publishers.

Pintrich, P. R., & Schrauben, B. (1992). Students' motivational beliefs and their cognitive engagement in classroom academic tasks. In D. H. Schunk & J. L. Meece (Eds.), Student perceptions in the classroom. Hillsdale, NJ: Lawrence Erlbaum.

Plous, S. (2000). Tips on creating and maintaining an educational World Wide Web site. Teaching of Psychology, 27, 63-70.

Poole, D. M. (2000). Student participation in a discussion-oriented online course: A case study. Journal of Research on Computing in Education, 33(2), 162-177.
Quinn, D., & Reid, I. (2003). Using innovative online quizzes to assist learning. Retrieved October 29, 2007, from http://ausweb.scu.edu.au/aw03/papers/quinn/paper.html

Reeves, T. C. (2000). Alternative assessment approaches for online learning environments in higher education. Journal of Educational Computing Research, 23(1), 101-111.

Rowntree, D. (1982). Educational technology in curriculum development. Newcastle upon Tyne, UK: Athenaeum Press Ltd.

Rudner, L. M. (1998). An online, interactive, computer adaptive testing tutorial. Retrieved October 29, 2007, from http://edres.org/scripts/cat

Ryan, Y. (2000). Assessment in online teaching. Paper presented at the Australian Universities Teaching Committee Forum 2000, Canberra, Australia. Retrieved October 29, 2007, from http://www.autc.gov.au/forum/papers/onlineteaching1.rtf

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119-144.

Sadler, D. R. (1998). Formative assessment: Revisiting the territory. Assessment in Education, 5(1), 77-84.

Scalise, K., & Gifford, B. (2006). Computer-based assessment in e-learning: A framework for constructing "Intermediate Constraint" questions and tasks for technology platforms. The Journal of Technology, Learning, and Assessment, 4(6), 1-44.

Schulze, A., & O'Keefe, A. (2002, August). Effectively using self-assessment in online learning. Paper presented at the 18th Annual Conference on Distance Teaching Learning, Madison, Wisconsin.

Scriven, M. (1967). The methodology of evaluation. In R. Tyler, R. Gagne & M. Scriven (Eds.), Perspectives of curriculum evaluation (pp. 39-83). Chicago: Rand McNally.

Sherman, R. C. (1998). Using the World Wide Web to teach everyday applications of social psychology. Teaching of Psychology, 25, 212-216.

Sluijsmans, D. M. A., Prins, F. J., & Martens, R. L. (2006). The design of competency-based performance assessment in e-learning. Learning Environments Research, 9, 45-66.

Stephens, D. (2001). Use of computer assisted assessment: Benefits to students and staff. Education for Information, 19, 265-275.

Stiggins, R. J. (1987). Design and development of performance assessment. Educational Measurement: Issues and Practice, 4, 263-273.

Taras, M. (2001). The use of tutor feedback and student self-assessment in summative assessment tasks: Towards transparency for students and for tutors. Assessment and Evaluation in Higher Education, 26(6), 606-614.

Taras, M. (2003). To feedback or not to feedback in student self-assessment. Assessment and Evaluation in Higher Education, 25(5), 549-565.

Van Merriënboer, J. J. G. (1997). Training complex cognitive skills. Englewood Cliffs, NJ: Educational Technology Publications.

Velan, G. M., Killen, M. T., Dziegielewski, M., & Kumar, R. K. (2002). Development and evaluation of a computer-assisted learning module on glomerulonephritis for medical students. Medical Teacher, 24(4), 412-416.

Wang, K. H., Wang, T. H., Wang, W. L., & Huang, S. C. (2006). Learning styles and formative assessment strategies: Enhancing student achievement in Web-based learning. Journal of Computer Assisted Learning, 22(3), 207.

Ward, M., & Newlands, D. (1998). Use of the Web in undergraduate teaching. Computers and Education, 31, 171-184.
Frederiksen, N. (1984). The real test bias: Influences of testing on teaching and learning. American Psychologist, 39, 193-202.

Hoskins, S. L., & van Hooff, J. C. (2005). Motivation and ability: Which students use online learning and what influence does it have on their achievement? British Journal of Educational Technology, 36(2), 177-192.

Wiggins, G. (1989). A true test: Toward a more authentic and equitable assessment. Phi Delta Kappan, 70, 703-713.

William, D., & Black, P. (1996). Meanings and consequences: A basis for distinguishing formative and summative functions of assessment. British Educational Research Journal, 22, 537-548.

Zakrzewski, S., & Bull, J. (1999). The mass implementation and evaluation of computer-based assessments. Assessment & Evaluation in Higher Education, 23(2), 141-152.
ADDITIONAL READINGS

Barbosa, H., & García, F. (2005, July). Importance of online assessment in the e-learning process. Paper presented at the ITHET 6th Annual International Conference, Juan Dolio, Dominican Republic.

Bennett, R. E., Goodman, M., Hessinger, J., Ligget, J., Marshall, G., Kahn, H., et al. (1999). Using multimedia in large-scale computer-based testing programs. Computers in Human Behavior, 15, 283-294.

Billings, D. (2000). A framework for assessing outcomes and practices in Web-based courses in nursing. Journal of Nursing Education, 39, 61-67.

Boud, D., Cohen, R., & Sampson, J. (1999). Peer learning and assessment. Assessment and Evaluation in Higher Education, 24(4), 413-426.

Bracey, G., & Rudner, L. M. (1992). Person-fit statistics: High potential and many unanswered questions. Practical Assessment, Research & Evaluation, 3(7). Retrieved October 29, 2007, from http://PAREonline.net/getvn.asp?v=3&n=7

Comeaux, P. (2005). Assessing online learning. Bolton, MA: Anker Publishing Company, Inc.

Howell, S. L., & Hricko, M. (2006). Online assessment and measurement: Case studies from higher education, K-12 and corporate. London: Information Science Publishing.

Hricko, M., & Howell, S. L. (2006). Online assessment and measurement: Foundations and challenges. London: Information Science Publishing.

Huba, M. E. (2000). Learner-centred assessment on college campus: Shifting the focus from teaching to learning. London: Allyn and Bacon.

McDonald, B., & Boud, D. (2003). The effects of self assessment training on performance in external examinations. Assessment in Education, 10(2), 210-220.

McIntosh, M. E. (1997). Formative assessment in mathematics. Clearing House, 71(2), 92-97.

Peat, M., & Franklin, S. (2002). Supporting student learning: The use of computer-based formative assessment modules. British Journal of Educational Technology, 33(5), 515-523.

Pellegrino, J., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: The science and design of educational assessment. In Center for Education (Ed.). Washington, DC: National Academy Press.

Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28(1), 4-13.

Ricketts, C., & Zakrzewski, S. (2005). A risk-analysis approach to implementing Web-based assessment. Assessment & Evaluation in Higher Education, 30(6), 603-620.
Roberts, T. S. (2006). Self, peer and group assessment in e-learning. London: Information Science Publishing.

Silberman, M. (1996). Active learning: 101 strategies to teach any subject. Needham Heights, MA: Allyn & Bacon.

Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Stiggins, R. (2005). Assessment FOR learning: Building a culture of confident learners. In R. DuFour, R. Eaker & R. DuFour (Eds.), On common ground: The power of professional learning communities. Bloomington, IN: Solution Tree (formerly National Educational Service).

Straus, S., Miles, J., & Levesque, L. (2001). The effects of videoconference, telephone, and face-to-face media on interviewer and applicant judgments in employment interviews. Journal of Management, 27, 363-381.

Taras, M. (2002). Using assessment for learning and learning from assessment. Assessment & Evaluation in Higher Education, 27(2), 501-510.

Yan, Z., Hao, H., Hobbs, L. J., & Wen, N. (2003). The psychology of e-learning: A field of study. Journal of Educational Computing Research, 29(3), 285-296.

Yorke, M. (2003). Formative assessment in higher education: Moves towards theory and the enhancement of pedagogic practice. Higher Education, 45(4), 477-501.
Chapter XVII
Designing an Online Assessment in E-Learning

María José Rodríguez Conde
Universidad de Salamanca, Spain
ABSTRACT

In this chapter we analyse the term "assessment" as applied to all the elements that constitute a learning environment (evaluation), and in particular to the assessment of the learning process carried out within what we call e-learning. The perspective guiding the text is methodological and pedagogical in nature. We set out to plan the assessment process in online learning environments, dealing in depth with its constituent elements: the objectives and functions of assessment, assessment criteria and indicators, the people involved and assessment agents, software instruments and tools for data collection, and the analysis of information and reporting. We discuss institutional strategies for incorporating this e-assessment methodology in higher education institutions, and draw final conclusions about the validity and appropriateness of e-learning assessment processes.
INTRODUCTION

Evaluation covers a wide semantic field, which should be explained from the outset. The terms evaluation, assessment, and evaluative research appear to refer, interchangeably in some cases, to different types of evaluation processes, yet each alludes to a slightly different concept. The term assessment is used when referring to the evaluation of people; from there
we have the term learning assessment; the term program evaluation, in contrast, denotes the evaluation of programs. With regard to the general concept of "evaluation," Stufflebeam (1999, p. 3), dealing with the evaluation of educational programs, defines the term as a study designed and conducted to assist some audience to measure an object's merit and worth. Evaluation, on the other hand, when referred to as assessment, is used to
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
determine the objective level of a variable of interest (marks in a test, level of interaction, time taken to respond, etc.), and forms part of the broader evaluation concept. In our pedagogic context, evaluation is defined, in a general sense, as the set of systematic processes of collecting, analysing, and interpreting valid and reliable information which, when compared with a point of reference or criterion, allows us to make a decision that favours the improvement of the object being evaluated. We wish to highlight three aspects of this concept. First, to evaluate is not merely to know something, nor to hold an opinion on something and express it. Evaluation is a process in which we, as teaching professionals, follow a methodology and certain techniques (conditions); it is therefore far more than incidental knowledge, intuition, or opinion. From this arises the concept of measurement: without measurement, valuation amounts to a subjective opinion, and we would not be carrying out an objective evaluation. Second, we evaluate precisely when we are able to compare the information available to us against some frame of reference, criterion, or norm that governs our actions. Different types of evaluation are usually distinguished on this basis: normative evaluation, evaluation with an external referent or criterion, and personalised evaluation. Finally, the evaluation process concludes with decision making. This is one of the aspects acquiring increasing importance within the current concept of evaluation, above all because it links the evaluation process to a process of improvement; decision making should therefore be carried out with the aim of optimizing the very process or activity being evaluated.
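The distinction between frames of reference can be made concrete. In the hypothetical Python sketch below (scores, thresholds, and cohort data are invented), the same mark leads to different judgements depending on whether it is compared with an external criterion or with the cohort's norm; the decision step is what turns a mere measurement into an evaluation.

```python
def criterion_referenced(score, criterion):
    """Compare a measurement against a fixed external criterion
    and make a decision on that basis."""
    return "pass" if score >= criterion else "needs improvement"

def norm_referenced(score, cohort_scores):
    """Compare the same measurement against the cohort (normative
    referent): here, whether the student exceeds the cohort median."""
    ranked = sorted(cohort_scores)
    median = ranked[len(ranked) // 2]
    return "above cohort median" if score > median else "at or below cohort median"

cohort = [45, 52, 60, 68, 75, 81, 90]
print(criterion_referenced(68, criterion=50))  # pass
print(norm_referenced(68, cohort))             # at or below cohort median
```

A score of 68 passes the external criterion yet does not stand out against this cohort: the referent chosen, not the measurement alone, determines the evaluative judgement.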
The object of evaluation on which this chapter centres is, then, learning in the educational spaces of so-called e-learning environments.
As an example of the importance that computer-assisted assessment programs are acquiring nowadays, and of the problems that accompany them, we cite Gibbs (2006, p. 18), who asserts that in the United Kingdom,

...the implementation of institution-wide "virtual learning environments" (or e-learning) has made it much easier to use simple forms of computer-based assessment and there has been ever more funding, projects, dissemination and staff development to support those who would like to use such methods. Unlike the USA, much use of computer-aided assessment is largely formative in nature: to give students practice and feedback and to highlight where more studying might be appropriate before the "real" assessment at a later point.

Nevertheless, it is a fact that if training is actually carried out through this type of environment (e-learning), an associated evaluation process is required. How it is carried out, what computing tools are available, how secure these systems are, how results are transmitted to students, and so forth, are problems to be solved from a technological point of view. What the purpose of this evaluation is, against which criteria the obtained data are compared, and what standard or reference point is established to distinguish levels of performance, and so forth, are problems of a pedagogical type that we are going to analyse here. Besides these considerations, there is a series of peculiarities that have to be taken into account in virtual learning environments, such as the following:

• In a learning programme which offers the student some type of recognition of the level achieved, the evaluation of performance will be the first concern when tackling the teaching-learning process. If the student is also interested in acquiring new concepts or strategies, the self-evaluation or learning-evaluation processes will be very useful.
• In e-learning programmes, it is fundamental to evaluate participation and to establish whether the students have attained certain learning goals and, hence, whether they have reached the objectives of the course.
• In online courses it is essential for the student to receive feedback on progress in the course. This also serves as a motivating element.
• In online teaching, contrary to expectations, there is much material for evaluating the students, given that a large part of the communication is written.
• Hence, the assessment process must be planned, the assessment strategies have to be coherent with the material provided online, and the criteria or references for assessment must be stated explicitly, so that the student at a distance knows what he or she is going to be evaluated on, how, when, and under what criteria the work will be assessed.
As García Carrasco et al. (2002) point out, evaluation is an essential part of the teaching-learning process, both as a measurement of the student's achievement of the learning objectives and as a control of the quality of this process. However, current evaluation instruments have many limitations in the reading-writing context, which can be partly overcome by the new information and communication technologies. New technologies are emerging which make it possible to construct more complete models, closer to the evaluation criteria. They are mostly object-oriented technologies, which provide greater advantages than traditional ones. Likewise, the Web is evolving towards a modular structure, following this same philosophy. The Web is a universal space of information, but the challenge is to turn it into a universal space of knowledge.
Methodology of Assessment in E-Learning

The perspective guiding this text is of a methodological and pedagogical nature; we therefore try to plan the online evaluation process by dealing in depth with the different elements which constitute it: objectives and functions of assessment, assessment criteria and indicators, people involved and assessment agents, software instruments and tools for data collection, and analysis of the information and reports.
Aims of the Assessment

A first element to consider in a process of evaluating student learning, at any educational level, is the aim of assessment: Why do we assess? At this point, we should reflect on two concepts associated with evaluation and differentiated by their purpose: "formative assessment" and "summative assessment" (Scriven, 1967). Formative evaluation is mainly developed during the educational process, because its objective is to improve that process while the different tasks or teaching-learning activities are carried out; that is to say, formative evaluation centres its focus on the development phase. It is a type of assessment intimately linked to continuous assessment; both are carried out during the teaching-learning process. Summative assessment, on the other hand, tries to verify whether the objectives of a certain programme have been achieved and whether the programme has been both efficient and effective; it is therefore planned at the end, once its application has finished (Rosales, 1990). Nowadays, references to the concept of "assessment" in a general sense tend to stress the formative idea (Charman, 2005; Robinson & Udall, 2006) rather than the summative one, insisting on the improvement of the students' learning process by facilitating and promoting the reflexive attitude required after the feedback received.
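The distinction between the two purposes can be made concrete with a small sketch: the same item bank delivered under a formative policy (feedback returned, nothing recorded) or a summative one (a mark recorded at the end). All names, the pass mark, and the data are invented for illustration; no real LMS API is implied.

```python
# Illustrative sketch: formative vs. summative delivery of the same test.
# The 0.5 pass mark and the answer keys are invented for the example.

def grade(answers, key):
    """Return the proportion of correct answers."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

def deliver(answers, key, mode):
    score = grade(answers, key)
    if mode == "formative":
        # Feedback goes back to the learner; no mark is recorded.
        feedback = [("correct" if a == k else f"review item {i + 1}")
                    for i, (a, k) in enumerate(zip(answers, key))]
        return {"score": score, "feedback": feedback, "recorded": False}
    # Summative: the mark is recorded against the course objectives.
    return {"score": score, "passed": score >= 0.5, "recorded": True}

key = ["b", "a", "d", "c"]
print(deliver(["b", "a", "a", "c"], key, "formative"))
print(deliver(["b", "a", "a", "c"], key, "summative"))
```

The point of the sketch is that the scoring logic is identical; only what is done with the score (improvement during the process vs. verification at the end) differs.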
Assessment, as an element of the educational process, has been characterised as the critical factor in the teaching-learning process (Brown, Bull, & Pendlebury, 1997; Bull & McKenna, 2001; McAlpine, 2002; Warburton & Conole, 2003). If the term assessment has had a negative connotation, it has been due to its consideration outside the learning improvement process. Pérez Juste (2006, p. 24) states that:

...to assessment is assigned, or recognised, the function of improvement; a function which, by the way, is totally coherent with the essence of educative acts: we have to bear in mind that education is a systematic and intentional activity at the service of the improvement of people.

This means that education has often been analysed regardless of educational quality, when both concepts must actually be interconnected. Assessment is the element which contributes most to the quality of the teaching-learning process, and it therefore needs to be planned, useful, coherent, and ethical.1 Underlining the importance of the relation between assessment and quality, we want to stress the following words:

Nowadays, when so much is said about quality, it is important to state that assessment and education are closely bound together. It is possible that neither assessment nor quality make sense, at least in education, if they are considered as independent parts. (Zabalza, 2001, p. 270)

In brief, assessment is an important part of the teaching-learning process, acting as a means for the achievement of the objectives of the learning process by the student, and also as a means to control the quality of that process (Pérez Juste, López, Peralta, & Municio, 2004).
Assessment Criteria and Indicators

We evaluate precisely when we are in a position to establish a comparison between the information available and one of the reference frameworks, criteria, or normotypes that govern our action. In this case, different types of assessment are usually identified: normative, with an external or criterial referent, and personalised assessment. When no explicit assessment criteria exist, the objectives of the learning process become the assessment referents. When, on assessing students' learning, for example, we take as a referent the group the subject belongs to, conditioning the individual's mark by the student's relative position in it, we are in assessment contexts with reference to the norm, or normative evaluation. If the assessment is made in reference to previously specified criteria, that is, the passing of educational objectives, we are in assessment situations with reference to a criterion, or criterial assessment (Popham, 1983). And, if the subject's results are compared to that subject's own previous results, we are in personalised assessment. Wise et al. (2006) write a chapter from the perspective of an engineering teacher at Penn State University, presenting two different online assessment systems used to prepare for professional accreditation in engineering. The first, an online self-assessment instrument, captured on a weekly basis three types of class-level data from the faculty: learning goal(s), learning activities to support each goal, and a performance summary. The second online assessment system focused on three data sources or measurements that would provide further evidence to the accreditation agency that outcomes were being met (criterial assessment). Those three data targets included student performance on each outcome, faculty perception of course effectiveness, and students' perception of their own degree of mastery
at the outcome level. This triangulation of data clearly provides the multilevel, outcomes-based measurements critical not just for accreditation but, even more importantly, for the continuous improvement and advancement of student learning (formative assessment). If the object of evaluation is the learning process, we must establish a psychological frame of reference from which to plan this assessment process. An interesting approach is that of Mishra (2002), who sets out the contribution of the three psychological theories which have had most impact on design and instructional practice in online learning environments (behaviourism, cognitive psychology, and constructivism). With respect to the learning assessment indicators (the information to gather), when learning is conceived from the perspective of constructivist psychology, we should consider different fields of knowledge when collecting information for an evaluation. In this sense, in accordance with the aims of the training programme and according to the level and characteristics of the course, the learning information can refer to three large fields: conceptual (knowledge, comprehension, application, analysis, synthesis, and valuation), skills or abilities, and attitudes. In this study we will suggest which of the information-gathering strategies offered by online training platforms are best suited to the types of content to be evaluated. Learning evaluation indicators should be linked to the goals one is trying to evaluate; where no other type of criterion or referent exists, the goal itself becomes one (O'Donovan, Price, & Rust, 2004). As an example, we have associated the dimensions to be evaluated in the student with the most suitable indicators or procedures.
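The three referents just described (normative, criterial, and personalised) can be illustrated with a minimal sketch; the scores, the group, and the 0.6 criterion are invented for the example.

```python
# Illustrative sketch of the three assessment referents.

def normative(score, group_scores):
    """Norm-referenced: the score's position within the group (0..1)."""
    below = sum(s < score for s in group_scores)
    return below / len(group_scores)

def criterial(score, criterion=0.6):
    """Criterion-referenced: is the fixed educational objective passed?"""
    return score >= criterion

def personalised(score, previous_score):
    """Personalised: progress against the student's own earlier result."""
    return score - previous_score

group = [0.40, 0.55, 0.62, 0.70, 0.85]
print(normative(0.62, group))    # share of the group below this student
print(criterial(0.62))           # whether the 0.6 criterion is met
print(personalised(0.62, 0.50))  # improvement over a previous attempt
```

The same raw score of 0.62 thus yields three different judgements, depending entirely on the referent chosen.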
People Involved and Assessment Agents

In this section we refer to the people involved in the assessment process of students, those who can provide us with information about the level of acquisition of the contents taught. Linked to the concepts of evaluation we have been analysing, another typology appears according to the agent carrying out the evaluation: self-evaluation and hetero-evaluation. In the case of learning, the student can evaluate better than anyone else the effort made, as well as the difficulties encountered and the satisfaction caused by the learning. Such self-evaluation processes are more adequately approached in situations of formative evaluation. Processes of summative evaluation, on the other hand, require systems of hetero-evaluation, or assessment by other agents, to complement the former.
Table 1. Constructivist tasks vs. Web tools (Mishra, 2002, p. 494)

• Establishment of personal and group objectives/goals: e-mails, discussion groups, note pads
• Discuss and debate ideas and receive feedback: e-mails, discussion groups, voice-chat
• Seek and collect information: Web pages, search engines, digital drop boxes, bookmarking
• Organizing information in a coherent framework: software to analyze data, prepare labels, charts, and concept maps
• Integrate different external information into internal conceptions: note taking, annotations, and so forth
• Generate/construct new information: HTML editors, Web page creation tools, word processors, and so forth
• Manipulate external information and variables: simulation and animation on the Web
• Understanding real-world phenomena: streaming media technology for audio and video
Table 2. Learning evaluation dimensions and indicators

• Acquisition of conceptual contents: correct answers in objective tests, open-answer tests, and so forth; production of work, tasks, projects, and so forth, online.
• Acquisition of procedural contents (skills, etc.): production of work online, projects, group assignments, wikis, portfolios, and so forth.
• Acquisition of attitudes: forms, online questionnaires, chats, discussion forums, and so forth.
We understand that in distance learning addressed to adults seeking a certain qualification, the criterial, summative, hetero-evaluation type would be the most suitable for certifying that they have satisfactorily achieved the objectives formulated in the educational process and that they fit the profile of the course. However, the online learning system, based on the new information and communication technologies, will favour or foster formative evaluation systems based on self-evaluation, with objective marking systems that help students situate themselves at the level of learning achieved and redirect the process towards higher levels of performance.
Information-Gathering Techniques for e-Assessment in E-Learning

The next step in a methodological process for assessing learning in e-learning environments is to select the information-gathering technique suitable for each learning objective. For this purpose, learning management systems (LMS) offer us different alternatives. Each tool comes with advantages and disadvantages, which we explain below. One of the computer applications most used in student evaluation is software for designing objective (closed-answer) tests with the possibility of self-correction (Ashton et al., 2006, reporting the experience of the SCROLLA project in Scotland). This does not mean that the Internet does not offer other resources of high pedagogical value, although their use may not be technologically simple. In this section, we shall briefly describe
different procedures for evaluating the level of competence acquired by the student which can be used on the Internet. In the following chart, we give a general classification of them, together with the potential use of the technology. The use of different evaluation strategies through the Internet depends mainly on the type of learning we wish to evaluate and how we wish to use the evaluation. If the objective is merely summative, and the level of learning relates to knowledge acquired, the most suitable approach will be the use of objective tests. On the other hand, if we are seeking to evaluate with a formative purpose, in a context of constructivist learning which allows motivation to be included as an important factor, we shall have to resort to some system of self-evaluation with the necessary immediate feedback. Within all these evaluation strategies we observe two categories: procedures normally used in face-to-face teaching (traditional tests) and another group of tests that have lately been incorporated into evaluation (alternative tests). It seems evident that certain technological resources incorporated into the use of computers open up new possibilities for these new approaches to the recording of information. This is obvious in the case of the portfolio strategy, now incorporated into many educational software packages, whose use is beginning to show a greater commitment of the students to self-evaluation and self-learning (Klenowski, 2002). Electronic mail, databases, and discussion lists, for their part, make it possible to store and exchange the students' work, both in its process and in its products,
Table 3. Assessment tools and potential use of the technology (Rodríguez-Conde & Herrera-García, 2005)

a) Objective tests (closed exams): HIGH. The evaluation system can be completely computerised (from the design of the test to its correction and the preparation of reports).
b) Open-answer tests (essay, short answer, and so forth): LOW. This would require recognition of key words, phrases, and so forth (content analysis procedures).
c) Practical exams (experimental tasks, simulations, observation, and so forth): MEDIUM. The technology can save data, analyse it, and use OMR.
d) Oral exams (before a board): LOW. Video-streaming technology could help.
e) Projects or papers (research, case studies, diagnoses, and so forth): LOW/MEDIUM/HIGH, depending on the type of content.
f) Self-evaluations: HIGH. The process can be completely computerised, with the incorporation of immediate feedback.
g) Portfolios: LOW/HIGH, depending on the organization of the contents to be evaluated and the correction procedure for the type of contents.
as well as to accelerate feedback mechanisms in both directions. But, once again, also in the case of evaluation, technology can serve to make a certain philosophy of learning operative. We think it is interesting to point out that, among the many studies on teachers' assessments of the effectiveness of different instruments for evaluation through the Internet, there is a general consensus on the importance of feedback and the effect it has on the student's learning. Charman (2005) points out the main advantages of the use of evaluation through the Internet:

• Frequency of the evaluation
• Immediate feedback (students like to receive feedback, since they connect their work with the results; they are offered help and guidance, and all this motivates them to continue studying)
• Immediate correction of work by the teachers (so that mistakes can be detected and, if necessary, certain contents adapted)
• Reliability in measurement (stability, accuracy)
• Flexibility of access (space and time)
• Motivation of the students
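Immediate, elaborated feedback of the kind listed above is straightforward to script: each option of a closed-answer item carries its own explanation, returned the moment the student answers. A minimal sketch, with an invented item; real platforms store such items in their own formats.

```python
# Sketch of a self-correcting objective item with elaborated feedback.
# The item content is invented for the example.

ITEM = {
    "stem": "Which assessment type is carried out during the process?",
    "options": {"a": "Summative", "b": "Formative", "c": "Normative"},
    "key": "b",
    # Elaborated feedback: a distinct explanation for each chosen option.
    "feedback": {
        "a": "No: summative assessment is planned at the end of the process.",
        "b": "Correct: formative assessment accompanies the process.",
        "c": "No: 'normative' names a referent, not a purpose.",
    },
}

def answer(item, choice):
    """Score one response and return the immediate feedback message."""
    correct = choice == item["key"]
    return correct, item["feedback"].get(choice, "Option not recognised.")

ok, msg = answer(ITEM, "a")
print(ok, "-", msg)
```

Because every distractor explains *why* it is wrong, the same item serves a formative purpose rather than merely recording a mark.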
From all the studies on evaluation through the Web, it can be concluded that it is beneficial both for the student and for the teacher. For the students, it helps them to improve their competence level, motivates them to study and, in short, is a useful learning tool. The teachers value the facility of being able to send the student immediate feedback by means of comments or guidance. However, there should be some modification in the instructional design: if the only change is that the assessment is automated (correction and preparation of reports for the student), there will be no change in the students' learning. What is really effective is the quality of the feedback that these technological tools allow, that is, using the evaluation in its formative modality, in which the student is offered detailed information on the student's own performance. Among the software applications currently on the market, after evaluating them through their demonstration versions over a period of time, we consider that the possibilities offered by Perception (http://www.questionmark.com), compared
with those of other programmes, make it one of the most suitable for carrying out an evaluation in a formative context, as we pointed out. This software was created by a British company and has been in use for over 10 years, in its successive versions. The facilities of this programme can be summarised in three aspects:

a. The types of questions that can be designed using its latest version: multiple choice, multiple answer, gap-filling, questions on an interactive image, matrix, selection, number entry, explanation, and development (essay) question.
b. It also makes it possible to offer elaborated feedback, in which more exhaustive and personalised explanations are offered according to the student's response. Furthermore, it makes it possible to adapt the exam in the style of computerised adaptive tests, through the concept of "blocks" within an exam. This system makes it possible to divide the test into parts with different questions, and to create conditions among the different blocks, so that when the student finishes one block, according to the score obtained, the student may repeat it, receive other questions, or consider the exam concluded.
c. The great variety of types of reports it presents, based on the exams done by the students of a group. Thus we can obtain through the Internet both individual and group results, with the appropriate statistics.
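The block mechanism just described can be sketched as a toy control flow: after each block, the score decides whether the student repeats it or moves on. This is only an illustration of the idea, not Perception's actual engine; the pass mark, attempt limit, and answer keys are all invented.

```python
# Toy sketch of exam "blocks" with score-conditioned branching.
# A real system would draw fresh questions and could end the exam early.

def run_blocks(blocks, answers_per_attempt, pass_mark=0.5, max_attempts=2):
    """blocks: list of answer keys; answers_per_attempt: successive answer lists."""
    attempts = iter(answers_per_attempt)
    results = []
    for key in blocks:
        for _ in range(max_attempts):
            answers = next(attempts)
            score = sum(a == k for a, k in zip(answers, key)) / len(key)
            if score >= pass_mark:
                break                 # block passed: move on to the next one
        results.append(score)         # last score obtained on this block
    return results

blocks = [["a", "b"], ["c", "d"]]
# First attempt at block 1 fails, the repeat passes; block 2 passes at once.
attempts = [["x", "x"], ["a", "b"], ["c", "d"]]
print(run_blocks(blocks, attempts))
```

The branching conditions (repeat, continue, conclude) live entirely in the comparison against `pass_mark`, which mirrors how conditions between blocks are configured declaratively rather than hard-coded.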
In short, it is a product with many possibilities for encouraging processes of self-evaluation of learning, according to the competence levels manifested, and in turn it allows teachers to follow up each student's progress. However, all these systems clearly need technical support to facilitate and manage the good functioning of the tool, and especially a server whose maintenance is guaranteed.
Data Analysis and Assessment Report

When carrying out the analysis of all the information that we can gather from a student in e-learning environments, a problem appears when
Figure 1. Preliminary view of the edition of an item. Perception 3.0
Figure 2. Preliminary view: Incorrect answer and associated feedback. Perception 3.0
establishing the assessment criteria, in the case of summative assessment, against which we are going to judge the execution and the level of accomplishment of the established objectives. Within these criteria, we find the standards or cut-off points we should establish for each level. To carry out these processes, the computing tool must contain automatic systems which allow us to program each of the levels and produce the result automatically, supplying all the information necessary to complete an individual report for each student. One of the solutions and reports presented by an LMS such as Moodle (http://moodle.org/) can be observed below. This tool allows the teacher to set several parameters with which to configure different tests of an objective type (items with a unique true answer), together with some security parameters. The student can answer, and then consult the results obtained item by item as well as the student's final score.

Figure 3. Report of student results (Moodle)

On the other hand, it is very interesting to know how each of the tests taken by the students has functioned, especially when using objective tests. A simple item analysis of the test can be produced, in the form of a chart exportable to other programs (spreadsheets, statistical packages such as SPSS, http://www.spss.com/, etc.), reflecting the typical indicators of Classical Test Theory (difficulty and discrimination indexes), as well as the average difficulty of the test as a whole, its reliability, and so forth.

Figure 4. Report of test or item analysis (Moodle)

The problem to be solved in the data analysis arises when the information gathered from the student is too extensive (the elaboration of a project, an essay-type answer, etc.). Systems based on content analysis with database-type tools already exist. For example, Margerum-Leys and Bass (2006) show that a database is a tool designed to organise and manage large amounts of information following a general process of content analysis: establishing analysis units, category systems (e.g., "very good," "good," "needs improvement"), and a system for assigning the units to the categories; such systems can help with the analysis of large bodies of information. Another alternative being considered is to work with the possibilities of the so-called "Semantic Web" and natural language processing. Furthermore, there are programs such as NUD*IST, Atlas.ti, AQUAD, and Ethnograph which facilitate this content analysis, based on the previous establishment of the analysis categories or on quantitative lexicometric study.
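The Classical Test Theory indicators mentioned above (item difficulty, item discrimination, and test reliability) can be computed directly from a 0/1 response matrix. A minimal sketch with toy data, using an upper/lower-group discrimination index and the KR-20 reliability formula; an LMS report would add per-student breakdowns on top of this.

```python
# Classical Test Theory indicators from a 0/1 response matrix
# (rows = students, columns = items); the data are invented toy values.

def item_analysis(matrix):
    n_students, n_items = len(matrix), len(matrix[0])
    totals = [sum(row) for row in matrix]
    # Difficulty index: proportion of students answering each item correctly.
    difficulty = [sum(row[j] for row in matrix) / n_students
                  for j in range(n_items)]
    # Discrimination: success-rate gap between the top and bottom halves
    # of the class (a common upper/lower-group index).
    order = sorted(range(n_students), key=lambda i: totals[i])
    half = n_students // 2
    low, high = order[:half], order[-half:]
    discrimination = [
        (sum(matrix[i][j] for i in high) - sum(matrix[i][j] for i in low)) / half
        for j in range(n_items)
    ]
    # KR-20 reliability estimate for the test as a whole.
    mean = sum(totals) / n_students
    var = sum((t - mean) ** 2 for t in totals) / n_students
    pq = sum(p * (1 - p) for p in difficulty)
    kr20 = (n_items / (n_items - 1)) * (1 - pq / var) if var else 0.0
    return difficulty, discrimination, kr20

data = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]]
diff, disc, rel = item_analysis(data)
print(diff, disc, round(rel, 2))
```

Exporting `difficulty` and `discrimination` per item is exactly the kind of chart the LMS report hands over to a spreadsheet or statistical package.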
Discussion

The Problem of Learning Assessment Based on the Use of the Internet: Management Alternatives in Today's Higher Education Environments

A computer-based assessment (CBA) system requires the joint work of students, teachers, and support staff, both technical and pedagogical. For this reason, much relevance has been given to the different roles that all participants, one way or another, must take on. We are referring to the involvement needed for the correct development of the CBA, so that every participant has a fixed set of activities, described in Figure 5. Any CBA system must offer the possibility of planning exams with different formats (single answer, case studies, and so forth); besides, when an exam is designed, aspects related to the novelty of the system must be taken into consideration: we know it is convenient to give the student prior familiarity with the evaluation system to be used, with the software's navigation system, with the formulation of questions, and so forth.

The creation of specific centres for assessment through the Internet is a strategy coming from the Anglo-Saxon context, where several centres have worked on online assessment. A good example is the centre known as SCROLLA2 (The Scottish Centre for Research into Online Learning and Assessment), an inter-university initiative gathering the joint work of the universities of Edinburgh, Glasgow, and Strathclyde. The work developed there is concerned with providing and carrying out research related to educational technologies, including their direct relation with assessment processes. In this centre, the work performed has dealt with topics related to online learning, online assessment, computer-assisted assessment, research, and information and communication technologies. The centre also plans a series of conferences and seminars directly related to evaluation and the uses of technologies, among which we can stress the future of computer-assisted assessment and the e-portfolio, among others. They are also working on the implementation of a Masters Degree in e-learning, including in
Figure 5. Roles of participants in an e-assessment (own elaboration from Zakrzewski & Steven, 2003)

Students: change their way of learning; improve their abilities; reflect on the process; work in the proper place and time.
Teachers: time to develop the tests efficiently; reduction of workload, freeing teachers to develop other areas of interest; quick feedback.
Supplementary staff: implementation and development of CBA systems; deep knowledge of the appropriate software.
Manager: coordination and direction of the team; investment in software and hardware; integration of the pre-existing evaluation systems.
its courses several content blocks, such as e-learning strategies, research methods, and online evaluation, among others. Work proposals in SCROLLA are aimed at reinforcing the incorporation of ICTs in teaching, learning, and training processes, including assessment processes. In the implementation of online assessment, the problem is not, in the first instance, whether students possess computers connected to the Internet, but the quality of the access; that is to say, access should be possible simultaneously, for example through wifi, without technical effects appearing that interfere with or negatively influence the development of the assessment exercises.

Another centre, Computer-Assisted Assessment3 (CAA), was the result of a consortium between four institutions: the Universities of Luton, Glasgow, Loughborough,4 and Oxford Brookes.5 Among the objectives and priorities of this centre we can emphasize the following: to promote the use of CAA in British higher education; to identify and develop good practice integrated in the study plans; to develop models and paradigms for the strategic development of CAA in departments, institutions, and other centres; and to promote the use of materials helping people interested in practising any form of assessment through technologies. We must emphasize that this centre provides information about the use of computer-assisted assessment in higher education, obtained in a project on this same topic initiated in 1998 and finished in 2001, together with its specific projects and "blueprint" reports, whose content is very varied, from assessment principles to item-analysis summaries and other content related to the issue treated. During those 3 years, the centre was an important referent which provided updated information about the CAA activities planned, the models developed, conferences, and so forth.

On the centre's Web site, we can see several sections (Centre, History, Outcomes, Contact Us, Blueprint) accessible from the main menu, which also give access to other secondary components where the activities and reports of the project which contributed to its creation are reflected.
Conclusion

We would like to point out several of the advantages that so-called e-learning processes can provide in the assessment of students:

• They offer great temporal flexibility, allowing students to choose the best moment to carry out the assessment; that is to say, students have more options to decide when to take it.
• Flexibility of space: the student chooses the place from which to complete the assessment; the only requirement is a computer connected to the Internet.
• It is possible to offer the student immediate feedback, thus promoting the correction of mistakes at the moment they are made. Here we should emphasize that the quantity of feedback is not as important as its quality.
• The incorporation of information and communication technologies motivates students: it represents a novel methodology, not previously used for assessment purposes, which brings greater interest and a better disposition of the student towards the tests.
• Online evaluation can be easily administered.
• Different instruments, strategies, or assessment tools can be used: not only the completion of an "objective" test through the Internet, but also other means such as participation in the different activities offered (forums, chats, discussion groups) or the production of varied tasks (research projects, professional activity designs, etc.).
On the other hand, and as a result of the e-learning experience of the Cisco Networking Academy™ Program, Behrens, Collison, and DeMark (2006) set out the seven C's of their comprehensive assessment model. As a final statement, we consider it necessary to design interdisciplinary research projects in the field of e-assessment, joining specialists in the content to be evaluated, specialists in training processes or pedagogy, and those in charge of the computing area. We have also observed, through contact with various experiences in international environments, that the formation of interdisciplinary working teams with institutional support will be a decisive, essential development factor in this subject.
Acknowledgment

We want to acknowledge the support received from the Research Group GRIAL (E-learning and Interaction Research Group) and from the Orientation and Educative Assessment Group (GE2O) of the University of Salamanca and, especially, from the R&D Research Project "E-Learning Platform Based on the Management of Knowledge, Learning Objects Libraries and Adaptive Systems (KEOPS Project)" (Reference TSI2005-00960), in the context of which we are developing this e-assessment research.
Future Research Directions

Research topics that can open up in this field include:

• To carry out empirical research, through experimental designs, to check the effectiveness of the use of ICTs in learning assessment.
Table 4. Summary of the seven C's of the Networking Academy comprehensive assessment model (Behrens et al., 2006, p. 232)

• Claims. Goal: clarity of design. Examples: develop content following the claims-implies-evidence-implies-tasks model; make the delivery system flexible to match different assessment goals.
• Collaboration. Goal: embed development in the instructional community. Examples: involve instructors in content development via an online authoring tool; establish an online feedback mechanism.
• Complexity. Goal: leverage digital technologies to make assessments as complex as appropriate. Examples: flexible scoring technologies; use of simulation and automated scoring of hands-on tasks; linking to curricula.
• Contextualization. Goal: translate and localize content for stakeholders around the world. Examples: authoring and delivery tools that support complex language formats; localization and review processes to ensure appropriate translation and delivery.
• Computing. Goal: analyze data to improve and revise content. Examples: use of classical and IRT models to analyse results; a flexible delivery system allows fast revision.
• Communication. Goal: empower stakeholders with data. Examples: multilevel grade book; item information pages and summary reports.
• Coordination. Goal: develop the assessment system in the context of other assessment and learning activities. Examples: linking of curricular assessments with external capstone assessments by design and statistics.
Designing an Online Assessment in E-Learning
• To propose training models through the Internet, establishing the contrast between different learning assessment tools.
• To compare the effectiveness and efficiency of different learning assessment tools experimentally and through the Internet.
• To propose evaluation strategies for problem-solving learning in online training environments.
• To design strategies or alternatives for formative assessment in e-learning, adapted to different learning styles.
• To research qualitative evaluation strategies in e-learning.
References

Ashton, H. S., Beevers, C. E., Milligan, C. D., Schofield, D. V., Thomas, R. C., & Youngson, M. A. (2006). Moving beyond objective testing in online assessment. In S. L. Howell & M. Hricko (Eds.), Online assessment and measurement: Case studies from higher education, K-12 and corporate (pp. 116-128). London: Information Science Publishing.

Behrens, J. T., Collison, T. A., & DeMark, S. (2006). The seven C's of comprehensive online assessment: Lessons learned from 36 million classroom assessments in the Cisco Networking Academy Program. In S. L. Howell & M. Hricko (Eds.), Online assessment and measurement: Case studies from higher education, K-12 and corporate (pp. 229-245). London: Information Science Publishing.

Brown, G., Bull, J., & Pendlebury, M. (1997). Assessing student learning in higher education. London: Routledge.

Bull, J., & McKenna, C. (2001). Blueprint for CAA. Loughborough: University of Loughborough.

Charman, D. (2005). Issues and impacts of using computer-based assessments (CBAs) for formative assessment. In S. Brown, J. Bull & P. Race (Eds.), Computer-assisted assessment in higher education (pp. 85-93). Eastbourne: Routledge.

García Carrasco, J., Pérez, M. A., Rodríguez, B., & Sánchez, M. C. (2002). Evaluar en la red. Teoría de la Educación: Educación y Cultura en la Sociedad de la Información, 3(5). Retrieved June 11, 2007, from http://www.usal.es/~teoriaeducacion/

Gibbs, G. (2006). Why assessment is changing. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education (pp. 11-22). New York: Routledge.

Joint Committee on Standards for Educational Evaluation. (1988). The personnel evaluation standards. Thousand Oaks, CA: Corwin Press.

Klenowski, V. (2002). Developing portfolios for learning and assessment: Processes and principles. London: RoutledgeFalmer.

Margerum-Leys, J., & Bass, K. M. (2006). Electronic tools for online assessment: An illustrative case study from teacher education. In S. L. Howell & M. Hricko (Eds.), Online assessment and measurement: Case studies from higher education, K-12 and corporate (pp. 62-81). London: Information Science Publishing.

McAlpine, M. (2002). Principles of assessment. Luton: CAA Centre.

Mishra, S. (2002). A design framework for online learning environments. British Journal of Educational Technology, 33(4), 493-496.

O'Donovan, B., Price, M., & Rust, C. (2004). Know what I mean? Enhancing student understanding of assessment standards and criteria. Teaching in Higher Education, 9(3), 325-336.

Pérez Juste, R. (2006). Evaluación de programas educativos. Madrid: La Muralla.

Pérez Juste, R., López, F., Peralta, M. D., & Municio, P. (2004). Hacia una educación de calidad: Gestión, instrumentos y evaluación. Madrid: Narcea.

Popham, W. J. (1983). Problemas y técnicas de evaluación educativa. Madrid: Anaya.

Robinson, A., & Udall, M. (2006). Using formative assessment to improve student learning through critical reflection. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education (pp. 92-99). Oxon: Routledge.

Rodríguez-Conde, M. J. (2006). Teaching evaluation in an e-learning environment. In E. Verdú, M. J. Verdú, J. García & R. López (Eds.), Best practices in e-learning: Towards a technology-based and quality education (pp. 55-70). Valladolid: BEM.

Rodríguez-Conde, M. J., & Herrera-García, M. E. (2005). Assessment processes in tele-learning programmes. In F. J. García, J. García, M. López, R. López & E. Verdú (Eds.), Educational virtual spaces in practice: The Odiseame approach (pp. 161-178). Barcelona: Ariel.

Rosales, C. (1990). Evaluar es reflexionar sobre la enseñanza. Madrid: Narcea.

Scriven, M. (1967). The methodology of evaluation. In Perspectives on curriculum evaluation (AERA Monograph Series on Curriculum Evaluation, No. 1). Chicago: Rand McNally.

Stufflebeam, D. L. (1999). Foundational models for 21st century program evaluation. Kalamazoo, MI: Western Michigan University, The Evaluation Center.

Warburton, B., & Conole, G. (2003). Key findings from recent literature on computer-aided assessment (pp. 1-19). ALT-C, University of Southampton.

Wise, J. C., Lall, D., Shull, P. J., Sathianathan, D., & Lee, S. H. (2006). Using Web-enabled technology in a performance-based accreditation environment. In S. L. Howell & M. Hricko (Eds.), Online assessment and measurement: Case studies from higher education, K-12 and corporate (pp. 98-115). London: Information Science Publishing.

Zabalza, M. A. (2001). Evaluación de los aprendizajes en la Universidad. In A. García-Valcárcel (Ed.), Didáctica Universitaria (pp. 261-291). Madrid: La Muralla.

Zakrzewski, S., & Steven, C. (2003). Computer-based assessment: Quality assurance issues, the hub of the wheel. Assessment & Evaluation in Higher Education, 28(6), 609-623.

Additional Reading

Books

Bull, J., & McKenna, C. (2001). Blueprint for CAA. Loughborough: University of Loughborough.

Falchikov, N. (2004). Improving assessment through student involvement: Practical solutions for aiding learning in higher and further education. London: Routledge.

Howell, S. L., & Hricko, M. (2006). Online assessments and measurement: Foundations and challenges. Hershey, PA: IGI Global.

Journal Articles

Alfonseca, E., Carro, R. M., Freire, M., Ortigosa, A., Pérez, D., & Rodríguez, P. (2005). Authoring of adaptive computer assisted assessment of free-text answers. Educational Technology & Society, 8(3), 53-65.

Challis, D. (2006). Committing to quality learning through adaptive online assessment. Assessment and Evaluation in Higher Education, 30(5), 519-527.
Lutticke, R. (2004). Problem solving with adaptive feedback. Lecture Notes in Computer Science, 3137, 417-420. Mason, R., Pegler, C., & Weller, M. (2004). E-portfolios: An assessment tool for online courses. British Journal of Educational Technology, 35(6), 717-727. Ricketts, C., & Wilks, S. J. (2002). Improving student performance through computer-based assessment: Insights from recent research. Assessment and Evaluation in Higher Education, 27(5), 475-479. Rovai, A. P. (2000). Online and traditional assessments: What is the difference? The Internet and Higher Education, 3(3), 141-151. Russell, J., Elton, L., Swinglehurst, D., & Greenhalgh, T. (2006). Using the online environment in assessment for learning: A case-study of a Web-based course in primary care. Assessment & Evaluation in Higher Education, 31(4), 465-478. Sim, G., Holifield, P., & Brown, M. (2004). Implementation of computer assisted assessment: Lessons from the literature. ALT-J, Research in Learning Technology, 12(3), 216-229. Underhill, A. F. (2006). Theories of learning and their implications for online assessment. Turkish Online Journal of Distance Education, TOJDE, 7(1), 165-174. Valenti, S., Neri, F., & Cucchiarelli, A. (2003). An overview of current research on automated essay grading. Journal of Information Technology Education, 2, 319-330. Weller, M. (2002). Assessment issues on a Web-based course. Assessment & Evaluation in Higher Education, 27(2), 109.
Internet

Examples and projects about e-assessment (all retrieved October 29, 2007):

Computer Aided Assessment (CAA), University of Edinburgh, UK: http://www.elearn.malts.ed.ac.uk/services/CAA/index.phtml

http://www.heacademy.ac.uk/633.htm

http://www.jiscinfonet.ac.uk/ (a good place to search for information on e-learning in the UK)

http://assessment.cetis.ac.uk/

http://www.caaconference.com/ (International Computer Assisted Assessment Conference: Research into e-assessment)

http://www.questionmark.com/us/glossary.aspx (Testing and Assessment Glossary of Terms)
Case Studies

JISC ITT: E-Assessment. Case studies of effective and innovative practice in the area of e-assessment. A joint project between the Open University and the University of Derby (September 2005 - March 2006). Dr Denise Whitelock (Lead), Simon Rae, Hassan Sheikh (all OU); Professor Don Mackenzie, Christine Whitehouse, Cornelia Ruedel (all UoD). Retrieved October 29, 2007, from http://kn.open.ac.uk/public/index.cfm?wpid=4927

Effective Practice with E-Assessment. An overview of technologies, policies, and practice in further and higher education. Author: The Joint Information Systems Committee (JISC) supports UK post-16 and higher education and research by providing leadership in the use of Information
and Communications Technology in support of learning, teaching, research and administration. JISC is funded by all the UK post-16 and higher education funding councils. www.jisc.ac.uk. Effective Practice with E-Assessment is the third in a series of JISC publications on the skillful use of e-learning in 21st century practice in a technology-rich context. In contrast to the preceding guides in the series—Effective Practice with e-learning and Innovative Practice with E-Learning—the focus of this publication is on practice in a broader institutional sense, including the potential impact of e-assessment on learning and teaching. Retrieved October 29, 2007, from http://www.jisc.ac.uk/media/documents/themes/ elearning/effprac_eassess.pdf The CAMEL Project: Collaborative Approaches to the Management of E-Learning. CAMEL was a project funded by the HEFCE Leadership, Governance, and Management programme which set out to explore how institutions who were making effective use of e-learning and who were collaborating in regional lifelong learning partnerships might be able to learn from each other in a Community of Practice based around study visits to each of the partner institutions. This short publication highlights some of the things CAMEL participants found out about e-learning and about each other. Retrieved October 29, 2007, from http://www. jiscinfonet.ac.uk/publications/camel
Electrical and Electronic Engineering Assessment Network. The project has built a set of peer reviewed test bank questions for electrical and electronic engineering. It has also established and built a network in order to identify and disseminate good practice in Engineering Assessment to UK Academics. Retrieved October 29, 2007, from http://www.e3an.ac.uk/
Endnotes

1. The Joint Committee on Standards for Educational Evaluation (1988, pp. 38-39) establishes four necessary characteristics to carry out an assessment as correct as possible: usefulness, feasibility (viability), legitimacy (honesty or ethical integrity), and precision.
2. SCROLLA, Scottish Centre for Research into Online Learning and Assessment: http://www.scrolla.ac.uk
3. http://www.caacentre.ac.uk
4. http://www.lboro.ac.uk/service/ltd/flicaa/index.html
5. http://www.brookes.ac.uk/services/ocsd/
Chapter XVIII
Quality Assessment of E-Facilitators

Evelyn Gullett
U21Global Graduate School for Global Leaders, Singapore
Abstract

Organizations, in particular HR/Training departments, strive to set forth good practices, quality assurance, and improvement on a continuing basis. With the continuous growth of online university programs, it is crucial for e-learning establishments to include service quality assessments along with mechanisms to help e-facilitators consistently maintain the highest quality standard when lecturing, teaching, guiding, administering, and supporting the online learner. This chapter discusses the application of an e-quality assessment matrix (e-QAM) as part of a quality assessment model that promotes continuous improvement of the e-learning environment. This model will serve as a tool for online universities and organizations to achieve a base standard of consistent quality that is essential for program accreditation and satisfaction of global customers.
Introduction

Both for-profit and nonprofit organizations aim to achieve consistent quality in order to maintain leadership in a competitive global market. Higher educational institutes are becoming professional businesses (Holmes & McElwee, 1995). With the continuous growth of online university programs, also referred to as e-universities, e-learning establishments, or virtual universities, it is important
that service quality assessments help e-facilitators consistently maintain the highest quality standard when lecturing, teaching, guiding, administering, and supporting the online learner. E-facilitators include professors, adjunct faculty, and educators, as well as organizational trainers, who conduct teaching and training online. This chapter discusses the application of a quality assessment model for e-facilitators that promotes continuous improvement of the e-learning environment and helps to achieve a standard of consistent quality, which is crucial for accreditation and quality management.

Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Background

E-education and e-training, which take place in a virtual online environment, allow knowledge to be shared, created, and enhanced on a global platform. With the growing popularity of these programs, universities are under pressure to maintain the quality of their online curricula and their delivery methods in order to remain competitive. Assessing the quality of e-professors and e-facilitators will assist in the advancement of online learning on many fronts.
Quality in Higher Education

Quality can be defined as "an essential and distinguishing attribute of something or someone" (Webster's Online Dictionary, 2006). There are various ways to measure quality in the academic sector. Dahlgaard, Kristensen, and Kanji (1995) define total quality education as an active joint effort of both students and employees towards continuous improvement and a high level of customer satisfaction. Cheng (1996) uses a system approach to define education quality by adding input, output, and processes as components that deliver services to exceed internal and external stakeholders' satisfaction. Winch (1996) adds user- and value-based quality to the definition. Other studies focus on organizational inputs and outputs (Cave, Hanney, Kogan, & Trevett, 1988; Johnes & Taylor, 1990) or on the processes (Green, 1994) of the same. This chapter addresses the following elements of quality: performance, for example, the reliability, timeliness, and competence of e-facilitators; conformance, or the consistency of service; and the use of appropriate design tools to assess
quality. The delivery of online education also includes customer-based aspects of quality, such as competence, responsiveness, reliability, access, courtesy, communication, credibility, security, understanding, and knowing (Zeithaml, Parasuraman, & Berry, 1990).
Total Quality Management (TQM)

Although Deming (1989), Juran (1988), and Crosby (1979) first applied the concept of quality to their teachings in the 1950s, the concept of total quality control (TQC) was introduced by Feigenbaum (1951). He defines total quality as a control mechanism that impacts the entire organization. TQM is known for its quality awards, such as the Deming Prize (Japan), Baldrige (USA), and the European Foundation of Quality Management (EFQM). Teamwork, customer focus, leadership, training, and continuous improvement tools are essential for a successful TQM program implementation (Sirvanci, 2004). Some scholars believe that TQM is a concept rather than a method (Burr, 1993); this may be one of the reasons why there is no universally defined theory of TQM (Sahney, Banwet, & Karunes, 2004). Nevertheless, Evans and Lindsay (1999) present some common TQM views:

• The goal is to deliver maximum satisfaction to the customer (or what the customer perceives to be maximum satisfaction).
• The entire organization, at all levels and functions, is involved in meeting and exceeding that goal.
• The TQM concept remains flexible and adaptable toward change and the evolution of the organization, its industry, and its customers.
• Responses to customer inquiries are delivered consistently, accurately, and in a timely manner, and waste is eliminated at every level.
The goal of the TQM revolution was to meet and exceed customer expectations by applying a continuous improvement process through an integrated system of tools, techniques, and training. Similar TQM goals have developed in the higher educational arena.
TQM in Higher Education

Originally concentrating on production and operation management, quality today embraces government, service organizations, healthcare institutions, the entire private sector, and educational organizations (Helms, Williams, & Nixon, 2001). Harvey and Knight (1996) find various ideas of quality in higher education, namely exceptional quality, perfect or consistent quality, quality fit for purpose, quality as value for money, and quality that transforms. Because TQM is a dynamic theory that encompasses many concepts, it is difficult to find a comprehensive definition of quality or TQM in higher education (Chadwick, 1994; McCulloch, 1993; Taylor & Hill, 1993). It is important to realize that the total quality of education at every level is a continuous dynamic interaction of all systems, that is, the university, support staff, administration, policies, organizational culture, processes and procedures, and faculty, as well as the interaction of students as a system. Unfortunately, empirical evidence regarding TQM methods applied in universities is restricted to administrative functions, such as financial aid and registration, that comprise the nonacademic side of the university (Koch & Fisher, 1998; Owlia & Aspinwall, 1996). Some researchers suggest that the application of TQM methods will "unite campuses, increase employee satisfaction, and improve nearly any process that it touches" (Koch & Fisher, 1998, p. 659).
Universities and their accrediting bodies provide evidence of a growing interest in TQM applications in higher education (Mergen, Grant, & Widrick, 2000). While TQM matters appear to be addressed in productivity and financial areas (Helms et al., 2001), the changing attitudes and perceptions of the global customer require a new response to meet and exceed his or her needs. Thus, a consistent effort to improve teaching must be embraced through a continuous quality improvement project.
Evaluation as Part of TQM in Higher Education

One constituent of a TQM project in higher online education is the evaluation of the e-facilitator. When evaluation takes place, quality is acknowledged (Stake, 1999). In other words, we look for excellence and improvement in performance by systematically determining the merit and significance of the evaluation taking place (Scriven, 1999). The traditional approach in the college setting evaluates the performance of each individual instructor in the classroom (Stake & Cohernour, 1999). Student evaluations are commonly used to evaluate instructors in both face-to-face and online settings; they are also used for faculty development recommendations (Scriven, 1995). While this evaluation tool may be helpful when integrated into the overall faculty evaluation, its validity would be limited if it were the only assessment or evaluative source considered. Thus, we may consider the presence of an online e-faculty mentor or supervisor who performs periodic quality checks throughout the semester by applying some type of evaluation tool. Quality assessment of performance and consistency of service appear to be missing in online universities. The next section will discuss this in more detail.
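The idea of weighing student evaluations against other sources can be illustrated with a small sketch. The source names, scores, and weights below are hypothetical, not taken from the chapter; the point is only that no single source determines the overall judgment.

```python
from dataclasses import dataclass

# Hypothetical illustration: combining several evaluation sources
# (student ratings, mentor spot-checks, self-assessment) into one
# weighted score, so the student survey is never the sole input.

@dataclass
class EvaluationSource:
    name: str
    score: float   # normalized 0.0-1.0
    weight: float  # relative importance (need not sum to 1)

def combined_score(sources):
    """Weighted average across all evaluation sources."""
    total_weight = sum(s.weight for s in sources)
    return sum(s.score * s.weight for s in sources) / total_weight

sources = [
    EvaluationSource("student ratings", 0.80, 0.4),
    EvaluationSource("mentor spot-checks", 0.70, 0.4),
    EvaluationSource("self-assessment", 0.90, 0.2),
]
print(round(combined_score(sources), 2))  # prints 0.78
```

Because the weights are relative, an institution could, for instance, reduce the influence of student ratings without discarding them, which mirrors the chapter's caution about single-source validity.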
Issues in Quality Assessment of E-Facilitators

The Rationale for a Quality Assessment Model

The quality assessment model resulted from the author's experience as a former HR professional and e-facilitator for different universities. Only two of the five universities (both of which were true virtual schools) offered some type of training program for new e-faculty members; these programs varied in length, thoroughness, intensity, and rigor. Communicating the expectations of basic quality standards to new e-faculty members was not a priority. One of these universities, in addition to a high-quality training program (which in effect serves as the interview for potential e-faculty candidates), offers a mentor program in which a full-time e-facilitator monitors the course of an e-faculty colleague. Nevertheless, no quality feedback is given while the e-faculty member is teaching the course, which would be useful for making immediate improvements; only a short evaluation is provided at the end of the class. While most colleges provide surveys in which students rate the overall course content and the e-facilitator, this is insufficient for quality improvement as far as the e-professor is concerned. Who monitors consistent quality assessments of product delivery via the e-facilitator? How is the quality of the e-facilitator measured? What methods are in place to maintain the highest quality standards? There is a need for a quality assessment tool that can be used to provide support and to consistently maintain good practices while the e-facilitator is teaching online. Moreover, such a tool can be linked to the e-facilitator's annual performance appraisal.
to adjust to the latest technology and to enhance student-centered learning based on cooperation and collaboration. There appears to be a correlation between what students perceived as quality service and the behavior displayed by faculty and administrators (Ham, 2003). Addressing quality and assessment during training and development sessions modifies faculty behavior, resulting in an immediate impact on the students’ perception of quality. Helms et al. (2001) argue that most university faculty in the U.S. are evaluated on their performance in teaching, research, and service and may be a part of the quality problem of higher education. For example, some e-facilitators poorly apply pedagogical methods, are not present in discussion areas, do not provide detailed and constructive feedback, or do not reply to questions in a timely manner. The University of Wisconsin won the first Baldrige education award as a higher education institution in 2002, making the successful application of TQM methods worthwhile criteria for consideration on a global level (Daniels, 2002). Since then, some 160 universities in the U.S. have actively applied TQM methods, and 50% of those universities have created quality councils (Burkhalter, 1996). Total quality in online education requires quality assessment and management tools that develop teaching and learning. There are not enough e-TQM methods currently in place to address the more significant problems (such as consistent quality delivery of teaching by e-facilitators) facing higher education in the delivery of distance learning.
Solutions and Recommendations

Deming (1989) argues that continuous quality improvement will ultimately lead to higher customer satisfaction and lower costs as a result
of fewer mistakes and delays, and better use of time, materials, and competencies, allowing firms to achieve sustainable competitive advantages. When reviewing candidates for accreditation, the American Assembly of Collegiate Schools of Business (AACSB) focuses on how continuous quality improvement is integrated into the university culture. The e-quality assessment matrix (e-QAM), as part of the e-quality management (e-QM) model, meets these criteria and fills the gap left by the lack of TQM methods in online education. Evaluation allows us to construct and to communicate a certain level of quality (Stake, 1999). The e-QAM serves as a standard against which the e-facilitator's performance can be evaluated and compared. This matrix can assist online universities in their continuous quality assurance, improvement, and management efforts in the delivery of e-teaching and e-learning.
The e-Quality Assessment Matrix (e-QAM)

The e-QAM (Table 1) evaluates the e-facilitator in five areas: presence, classroom organization and environment, interaction with students, discussion, and feedback. The e-facilitator's daily presence on the discussion board encourages students to engage in conversation; responding quickly in a respectful tone and encouraging questions conveys professionalism. Preparing the necessary administrative components of the course, such as e-mailing and posting the e-facilitator's introduction, and sending a welcome e-mail inviting students to ask questions at any time throughout the course, are examples of creating an open and collaborative online environment. Furthermore, e-mailing students additional guidelines regarding course expectations, for example on late assignments or the discussion board protocol, or guiding them to helpful sections within the courseware, are also elements of assessment for the organization and environment criteria.
Interacting with students in a respectful manner, consistently applying a friendly and enthusiastic tone, being aware of and sensitive to cultural differences, and assisting and guiding students to a successful completion of the course are elements of assessment in the student interaction criteria. The discussion and feedback criteria assess how well e-facilitators respect student diversity and how well they provide constructive, detailed, and meaningful feedback. The matrix describes the expected performance of the e-facilitator; these activities are constructed quality standards of practice that should be delivered continuously. In addition, target activities for outstanding performance are presented as a TQM approach to exceed the quality commitment in delivering e-teaching and e-learning. In contrast to the typical evaluation approach, this quality assessment tool relies on an already-existing category level of good, consistent quality that is taken to the highest level by providing continuous feedback. This tool is more formative and positive for the individual being assessed: by focusing on best quality practices, it gives e-professors something toward which to strive. All quality criteria in the e-QAM are interrelated to achieve outstanding quality performance. Application of the e-QAM provides detailed and applicable feedback to the e-facilitator, allowing him or her to improve continuous quality efforts. The matrix can also serve as a tool to enhance intercultural communication and understanding between facilitators. The e-QAM respects the knowledge that each individual can bring to the online classroom while integrating quality criteria. The evaluation criteria of the e-QAM (if met and exceeded by the e-facilitator) may also affect how the online learning environment is perceived by students. As a leadership tool, the e-QAM sets forth the expectation of certain behavior and allows for the review, assessment, and planning of training and development approaches.
Table 1. The e-quality assessment matrix
Expected Quality Performance & Activities
Target Activities: OUTSTANDING CONSISTENT Quality Performance
Presence
Actively visible in the classroom 5 days a week • Absent from class no more than 2 consecutive days • Posts 10% of the communications in the classroom
• • • •
Actively visible and felt in the classroom 7 days a week Absent from class less than 2 consecutive days Posts 18-20% of the communications in the classroom Replies to e-mails and questions posted on the discussion board within 24 hours
Classroom Organization & Environment
•
Sets up the entire course and is ready for students 3 days prior to the session start date Posts a welcome announcement & introduction Responds to e-mails and provides appropriate guidance for helping students get started
•
Quality Assessment of E-Facilitators

Table 1 continued

• Sets up the entire course and is ready for students one week prior to session start
• Posts a welcome announcement/introduction and e-mails the same to all students
• Posts reminder announcements regarding assignments and due dates
• Provides helpful guidelines such as discussion protocol, grading criteria, and overall expectations
• Creates an environment that encourages questions and participation
• Creates a supportive, student-centered learning environment
• Guides students throughout the course with appropriate e-mails/postings

• Maintains a friendly and professional attitude in all communications with students at all times
• Is enthusiastic, encouraging, and supportive when providing feedback and assisting students with problems and questions
• Addresses the needs and abilities of individual students
• Assists students with problems that may impede the successful completion of the course, directing students to tutorial services as appropriate

Interaction with Students

• Displays enthusiasm towards e-learning
• Encourages student-to-student and student-to-professor dialogue
• Encourages questions
• Respects diverse opinions
• Displays cultural sensitivity
• Provides continuous support at all times
• Respectfully communicates the importance of timeliness
• Identifies student problems early on and ensures that students receive the appropriate assistance needed to complete the course successfully

Discussion

• Encourages collaborative learning and active student involvement
• Fosters a highly interactive learning environment by showing his/her presence in the discussion area
• Facilitates a meaningful, ongoing dialog and applies real-world professional experiences in the classroom
• Reads discussion postings daily and provides feedback within 24 hours
• Respects diversity in students and different ways of learning
• Shows presence and interest by probing students further to deepen discussions
• Encourages experience-based learning by asking students to share their experiences when discussing theory
• Encourages cooperation, collaboration, and participation among students

Feedback

• Assesses student performance based on grading criteria and course competencies within 10 days of the assignment due date
• Provides detailed, constructive, meaningful, and appropriate feedback on student assignments
• Replies to e-mails within 48 hours
• Assesses student performance within 7 days of the assignment due date
• Provides detailed, constructive, meaningful feedback
• Provides detailed feedback of written analysis on theory and application when appropriate
• Advises the student precisely where improvement is expected and how they can achieve it
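In software terms, a rubric of this kind is essentially a mapping from evaluation categories to lists of criteria. The sketch below is a hypothetical illustration, not part of the published e-QAM: the category names and criterion wording are taken from Table 1, but the binary met/not-met scoring scheme and all identifiers (`E_QAM`, `category_scores`, `observed`) are assumptions made for the example.

```python
# Hypothetical representation of an e-QAM-style rubric (a small subset of
# the Table 1 criteria); the scoring scheme is an illustrative assumption.
E_QAM = {
    "Interaction with Students": [
        "Displays enthusiasm towards e-learning",
        "Encourages student-to-student and student-to-professor dialogue",
        "Encourages questions",
    ],
    "Discussion": [
        "Reads discussion postings daily and provides feedback within 24 hours",
        "Encourages cooperation, collaboration, and participation among students",
    ],
    "Feedback": [
        "Assesses student performance within 7 days of the assignment due date",
        "Replies to e-mails within 48 hours",
    ],
}

def category_scores(observed_met):
    """Return, per category, the fraction of criteria the e-facilitator met.

    `observed_met` maps a criterion string to True/False; criteria that were
    not observed count as not met.
    """
    scores = {}
    for category, criteria in E_QAM.items():
        met = sum(1 for criterion in criteria if observed_met.get(criterion, False))
        scores[category] = met / len(criteria)
    return scores

# Example evaluation of one e-facilitator against the subset above.
observed = {
    "Encourages questions": True,
    "Replies to e-mails within 48 hours": True,
}
print(category_scores(observed))
```

An institution applying something like the e-QAM would of course replace the binary judgement with whatever rating scale its evaluation policy prescribes; the point of the sketch is only that per-category scores fall out directly once the rubric is held as structured data.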
Conclusion

Competition in e-education is becoming fierce as institutions seek to gain positions in the global marketplace. Consequently, e-universities must focus on service quality at every level. According to Orsini (2000), colleges rarely make quality a part of their mission statement and are unable to agree on who their customers are. If e-universities take the next logical step of applying a rigorous e-quality management evaluation system effectively, continuous improvement and quality of service to the student will follow. The e-quality assessment matrix (e-QAM) presented in this chapter is therefore a solid starting point for giving performance feedback to the e-facilitator so that high quality standards can be achieved and maintained consistently. The performance and conformance of quality are tied to the quality of e-training and development. Applying the e-QAM allows online educational programs to monitor, guide, and support e-facilitators by assessing their performance on specific criteria important to quality in the online educational environment, and to train and develop them in the areas where they need to improve. If quality is consistently displayed by every e-facilitator throughout the online university, students will recognize quality as part of the organization's culture, which may lead to higher student satisfaction and higher enrollment. The e-QAM holds both the e-facilitator and the e-institution responsible for providing a high level of quality. The model may be used as a strategic e-TQM planning element to identify training and development needs, monitor progress, and continually enhance the quality of e-teaching and e-learning.
Future Research Directions

The e-quality management model serves as a solid starting point for maintaining high quality standards. Future research should compare applications of TQM models to e-educational environments. Another area of interest is how the e-QM model aligns with the organizational mission and the HR recruitment efforts of the university. Considering that student feedback in face-to-face settings is an element of value during facilitator evaluations (Scriven, 1999), other studies should examine the relationship between the e-QM model and student survey responses, and the alignment of the two sources of evaluation. With the question of quality in mind, additional research should explore how different class dynamics, such as personality, influence the e-professor's performance and the outcome of student evaluations of the e-facilitator. Bearing in mind that every individual has a different construct of quality (most frequently defined by personal experience) and that quality occurs the moment the individual recognizes its occurrence (Stake, 1999), future research should ask how the student and the e-facilitator each construct quality; this should then be compared to the quality standards set forth in the e-facilitator's evaluation. The concept of evaluating teaching as a community practice (Stake & Cohernour, 1999) should also be investigated in virtual universities. In the online environment, e-facilitators are geographically dispersed; thus, both individual and team contributions must be considered when evaluating teaching online. A case study might examine the effectiveness of e-facilitator development and growth in teaching skills when class visitations by peer facilitators are encouraged. A self-assessment component may be added to the evaluation model. Finally, the psychological impact of evaluations on e-facilitators should also be considered.
Continuous review of this model will produce continuous quality improvement in e-teaching and e-learning, resulting in personal and professional growth for the student, the individual, and the entire organization.
References

Burkhalter, B.B. (1996). How can institutions of higher education achieve quality within the new economy? Total Quality Management, 7, 153-160.

Burr, J.T. (1993, March). A new name for a not so new concept. Quality Progress, 87-88.

Cave, M., Hanney, S., Kogan, M., & Trevett, G. (1988). The use of performance indicators in higher education: A critical analysis of developing practice. London: Jessica Kingsley.

Chadwick, P.A. (1994). University's TQM initiative. In P. Nightingale & M. O'Neill (Eds.), Achieving quality learning in higher education (pp. 120-135). London: Kogan Page.

Cheng, Y.C. (1996). The pursuit of school effectiveness: Theory, policy, and research. Hong Kong: Hong Kong Institute of Educational Research, The Chinese University of Hong Kong.

Crosby, P.B. (1979). Quality is free. New York: McGraw-Hill.

Dahlgaard, J.J., Kristensen, K., & Kanji, G.K. (1995). TQM and education. Total Quality Management, 6(5-6).

Daniels, S.E. (2002). First to the top. Quality Progress, 35(5), 41-53.

Deming, W.E. (1989). Foundation for management of quality in the western world. New York: Perigee Books.

Evans, J.R., & Lindsay, W.M. (1999). The management and control of quality (4th ed.). Cincinnati, OH: South-Western College Publishing.
Feigenbaum, A.V. (1951). Quality control: Principles, practice, and administration. New York: McGraw-Hill.
McCulloch, M. (1993). Total quality management: Its relevance for higher education. Quality Assurance, 1(2), 5-11.
Green, D. (Ed.). (1994). What is quality in higher education? (pp. 3-20). Buckingham, UK: Open University Press.
Mergen, E., Grant, D., & Widrick, S.M. (2000). Quality management applied to higher education. Total Quality Management, 11(3), 345-352.
Ham, C.L. (2003). Service quality, customer satisfaction, and customer behavioral intentions in higher education. Published doctoral dissertation AAT 3090234. Nova Southeastern University, FL.
Orsini, J.N. (2000). Profound education. Total Quality Management, 11(4-6), 762-766.
Harvey, L., & Knight, P.T. (1996). Transforming higher education. Buckingham, UK: Open University Press.

Helms, M.M., Williams, A.B., & Nixon, J.C. (2001). TQM principles and their relevance to higher education: The question of tenure and post-tenure review. The International Journal of Education Management, 14(6-7), 322-331.

Holmes, G., & McElwee, G. (1995). Total quality management in higher education: How to approach human resource management. Total Quality Management, 7(6), 5.

Johnes, J., & Taylor, J. (1990). Performance indicators in higher education. Buckingham: Open University Press.

Juran, J.M., & Gryna, F.M. Jr. (1988). Juran's quality control handbook. New York: McGraw-Hill.

Kerlin, C.A. (2000). Measuring student satisfaction with the service processes of selected student educational support services at Everett Community College. Published dissertation AAT 9961458. Oregon State University.

Koch, J.V., & Fisher, J.L. (1998). Higher education and total quality management. Total Quality Management, 9(8), 659-668.
Owlia, M.S., & Aspinwall, E.M. (1996). A framework for the dimensions of quality in higher education. Total Quality Management, 7(2).

Ross, J.A., & Bruce, C.D. (2007). Teacher self-assessment: A mechanism for facilitating professional growth. Teaching & Teacher Education: An International Journal of Research and Studies, 23(2), 146-159.

Sahney, S., Banwet, D.K., & Karunes, S. (2004). Conceptualizing total quality management in higher education. The TQM Magazine, 16(2), 145-159.

Scriven, M. (1995). Student ratings offer useful input to teacher evaluations. Washington, DC: ERIC Clearinghouse on Assessment and Evaluation, Catholic University of America.

Scriven, M. (1999). The nature of evaluation (Pts. I-II). Washington, DC: ERIC Clearinghouse on Assessment and Evaluation.

Sirvanci, M.B. (2004). TQM implementation: Critical issues for TQM implementation in higher education. The TQM Magazine, 16(6), 382-386.

Stake, R.E. (1999). Representing quality in evaluation. Paper presented at the annual meeting of the American Educational Research Association, Quebec, Canada.

Stake, R.E., & Cohernour, E.J. (1999). Evaluation of college teaching in a community of practice. University of Illinois.
Taylor, A.W., & Hill, F.M. (1993). Issues for implementing TQM in further and higher education: The moderating influence of contextual variables. Quality Assurance in Education, 1(2), 12-21.

Webster's Online Dictionary. (n.d.). Retrieved October 30, 2007, from http://www.websters-online-dictionary.org/definition/quality

Winch, C. (1996). Quality in education. Oxford: Blackwell.

Zeithaml, V.A., Parasuraman, A., & Berry, L.L. (1990). Delivering quality service: Balancing customer perceptions and expectations. New York: Free Press.
Additional Reading

Attinello, J.R., Lare, D., & Waters, F. (2006). The value of teacher portfolios for evaluation and professional growth. NASSP Bulletin, 90(2), 132-152.

Aydin, C.H. (2005). Turkish mentors' perception of roles, competencies and resources for online teaching. Turkish Online Journal of Distance Education, 6(3), 8-12.

Briggs, S. (2005). Changing roles and competencies of academics. Active Learning in Higher Education: The Journal of the Institute for Learning and Teaching, 6(3), 256-268.

Bornmann, L., Mittag, S., & Daniel, H.-D. (2006). Quality assurance in higher education: Meta-evaluation of multi-stage evaluation procedures in Germany. Higher Education: The International Journal of Higher Education and Educational Planning, 52(4), 687-709.

Coppola, N.W., Hiltz, S.R., & Rotter, N.G. (2002). Becoming a virtual professor: Pedagogical roles and asynchronous learning networks. Journal of Management Information Systems, 18(4), 169-189.
Darabi, A.A., Sikorski, E.G., & Harvey, R.B. (2006). Validated competencies for distance teaching. Distance Education, 27(1), 105-122.

Davies, G., & Stacey, E. (2003). Quality education @ a distance. Boston, MA: Kluwer Academic Publishers.

Demirbolat, A.O. (2006). Education faculty students' tendencies and beliefs about the teacher's role in education: A case study in a Turkish university. Teaching & Teacher Education: An International Journal of Research and Studies, 22(8), 1068-1083.

Dommeyer, C.J., Baum, P., Hanna, R.W., & Chapman, K.S. (2004). Gathering faculty teaching evaluations by in-class and online surveys: Their effects on response rates and evaluations. Assessment & Evaluation in Higher Education, 29(5), 611-623.

Donaldson, S.I., Gooler, L.E., & Scriven, M. (2002). Strategies for managing evaluation anxiety: Toward a psychology of program evaluation. American Journal of Evaluation, 23(3), 261-273.

Easton, S.S. (2003). Clarifying the instructor's role in online distance learning. Communication Education, 52(9), 87-105.

Garcia, P., & Rose, S. (2007). The influence of technocentric collaboration on preservice teachers' attitudes about technology's role in powerful learning and teaching. Journal of Technology and Teacher Education, 15(2), 247-266.

Gultekin, M. (2006). The attitudes of preschool teacher candidates studying through distance education approach towards teaching profession and their perception levels of teaching competency. Turkish Online Journal of Distance Education, 7(3), 184-197.

Kyriakides, L., Demetriou, D., & Charalambous, C. (2006). Generating criteria for evaluating teachers through teacher effectiveness research. Educational Research, 48(1), 1-20.
Malm, B., & Lofgren, H. (2006). Teacher competence and students' conflict handling strategies. Abingdon, UK: Manchester University Press.

Macdonald, J., & Hills, L. (2005). Combining reflective logs with electronic networks for professional development among distance education tutors. Distance Education, 26(3), 325-339.

Palmer, A., & Collins, R. (2006). Perceptions of rewarding excellence in teaching: Motivation and the scholarship of teaching. Journal of Further and Higher Education, 30(2), 193-205.

Ross, J.A., & Bruce, C.D. (2007). Teacher self-assessment: A mechanism for facilitating professional growth. Teaching & Teacher Education: An International Journal of Research and Studies, 23(2), 146-159.

Scriven, M. (2001). Evaluation: Future tense. American Journal of Evaluation, 22(3), 301-307.

Scriven, M. (2002). Out of the frying pan, into the fire: Comments on Roth/Tobin. Journal of Personnel Evaluation in Education, 16(4), 303-306.

Stake, R. (2004). How far dare an evaluator go toward saving the world? American Journal of Evaluation, 25(1), 103-107.

Stufflebeam, D.L. (2001). Evaluation checklists: Practical tools for guiding and judging evaluations. American Journal of Evaluation, 22(1), 71-79.

Stufflebeam, D.L., & Wingate, L.A. (2005). A self-assessment procedure for use in evaluation training. American Journal of Evaluation, 26(4), 544-561.
Taut, S. (2007). Studying self-evaluation capacity building in a large international development organization. American Journal of Evaluation, 28(1), 45-59.

Turner, J.E., & Reed, P.A. (2004). Creation of a faculty task list for teaching in a televised distance learning environment. Journal of Industrial Teacher Education, 41(4), 1-13.

Uzunboylu, H. (2007). Teacher attitudes toward online education following an online inservice program. International Journal on E-Learning, 6(2), 267-277.

Villar Angulo, L.M., & Alegre De La Rosa, O.M. (2006). Online faculty development in the Canary Islands: A study of e-mentoring. Higher Education in Europe, 31(1), 65-81.

Walker, S.L. (2005). Modifying formative evaluation techniques for distance education class evaluation. Turkish Online Journal of Distance Education, 6(4), 7-11.

Wells, J. (2007). Key design factors in durable instructional technology professional development. Journal of Technology and Teacher Education, 15(1), 101-122.

Williams, P.E. (2003). Roles and competencies for distance education programs in higher education institutions. American Journal of Distance Education, 17(1), 45-57.

Wilkerson, J.R., & Lang, W.S. (2007). Assessing teacher competency: Five standards-based steps to valid measurement using the CAATS model. Thousand Oaks, CA: Sage Publications.
Chapter XIX
E-QUAL:
A Proposal to Measure the Quality of E-Learning Courses

Célio Gonçalo Marques, Instituto Politécnico de Tomar, Portugal
João Noivo, Universidade do Minho, Portugal
Abstract

This chapter presents a method to measure the quality of e-learning courses. First, an introduction to the problem of quality in e-learning is presented, emphasizing the importance of considering the learners' needs at all development and implementation stages. Next, several projects related to quality in e-learning are mentioned, and some of the most important existing models are described. Finally, a new proposal is presented, the e-Qual model, which is structured into four areas: learning contents, learning contexts, processes, and results. With this chapter, the authors aim not only to draw attention to this complicated issue, but above all to contribute to a higher credibility of e-learning by proposing a new model that stands out for its simplicity and flexibility in analyzing different pedagogical models.
Introduction

In a society where individual skills tend to become rapidly out of date, one of the greatest challenges is to discover new ways of learning that allow learners not only to choose what and when they want to learn, but also the way of learning most appropriate to their own case.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.

E-learning is a clear answer to this challenge. This distance-teaching model is characterised by its flexibility: flexibility in terms of time, allowing students to access the contents whenever they want, at their own pace; flexibility in terms of space, as students do not need to travel and can attend courses anywhere in the world; and flexibility in terms of syllabus contents, because module-based plans, learning contexts, and educational strategies are used according to the student's level of knowledge (Marques, 2004).

In the European Union, the European Parliament and the Council created the e-Learning Programme, a multi-annual programme (2004-2006) for the effective integration of information and communication technologies in education and training systems in Europe. This programme is the continuation of the e-Learning Action Plan (2001-2004) and has four action lines: promoting digital literacy; European virtual campuses; e-twinning of schools in Europe and promotion of teacher training; and transversal actions for the promotion of e-learning in Europe (European Commission, 2003). E-learning is also among the objectives of the Information Society Technologies Programme, which is part of the European Union Research Framework Programme.

The final report of the Study on the e-Learning Suppliers "Market" in Europe, conducted on behalf of the European Commission, DG Education and Culture, estimates that in 2003/2004 the European e-learning market was worth 4.66 to 5.16 billion euros (Danish Technological Institute, 2004). The highest-value sector is workplace learning, with 3.5-4.0 billion euros. The higher education sector occupies the third position with 100 million euros (60 million in e-learning technologies and 40 million in e-learning contents and services), a value that we believe will increase considerably as a result of the new phase in higher education governed by the challenges of the Bologna Declaration and lifelong learning.

Without quality e-learning there is no successful learning. According to the European Commission (2005), good/bad/best practices need to be identified and systematized. Ehlers, Goertz, Hildebrandt, and Pawlowski (2005) corroborate this opinion, making reference to a lack of actual implementation of, and information on, e-learning quality, for example about specific quality approaches.
As regards general process-oriented approaches, we refer to the International Standard ISO 9000 (ISO, 2003) as well as the total quality management (TQM) approach (Dahlgaard, Kristensen, & Khanji, 2005). Despite their recognised importance in our society, they have revealed several constraints when applied to e-learning courses (ADEIT, 2002; Ehlers, Goertz et al., 2005).

As specific approaches, we refer to those directed towards the product and those directed towards the process. The former are centred on industry specifications (standards) for learning objects, and their main promoters are the Advanced Distributed Learning Initiative (ADL), the Aviation Industry CBT Committee (AICC), the Institute of Electrical and Electronic Engineers (IEEE), the IMS Global Learning Consortium, Inc., the Alliance of Remote Instructional Authoring and Distribution Networks for Europe (ARIADNE), and the Dublin Core Meta-Data Initiative (DCMI). The latter aim to ensure the quality of the whole process, from the analysis of the requirements to actual operation. The Sustainable Environment for the Evaluation of Quality in e-Learning (SEEQUEL), Quality On the Line, the Methodological Guide for the Analysis of Quality in Open and Distance Learning (Meca-ODL), the Open eQuality Learning Standards, and InnoeLearning are among the most important projects.

Within these approaches, there is still the need to analyse quality from the perspective of the intervening actors: learners, producers, and distributors (ADEIT, 2002). Another important aspect is quality from the perspective of contents and contexts. According to Figueiredo (2002), the great enthusiasts of e-learning believe that the future lies in contents. In his opinion, a significant part of that future, maybe the most significant, will not lie in the contents but in the contexts created to materialise those contents.
This chapter aims to analyse quality in e-learning, to explore the problem of quality evaluation in e-learning, and to present some of the main evaluation models. Finally, a model (e-Qual)
is proposed to evaluate the quality of e-learning courses. Considering the complexity of models of this sort, which in many cases leads to their limited use, the e-Qual model stands out for its simplicity and flexibility in analysing different pedagogical models.
Quality in E-Learning

The concept of quality, defined by Juran (1988) as fitness for purpose, has gained huge significance in the educational field, particularly in e-learning. Quality in e-learning has become a slogan for educational policies, producers, and distributors, and a principal demand of learners (Ullmo & Ehlers, 2007). Although quality in e-learning is characterised less by a precise definition than by its positive connotation (Ullmo & Ehlers, 2007), we can say that it is achieved when an e-learning experience provides just the right content at just the right time, and when it helps learners master the needed knowledge and skills in such a manner that they are motivated to learn and to apply their learning to improve individual and organizational performance (ASTD & NGA, 2001).

It is obvious that quality will determine the future of e-learning (ASTD & NGA, 2001; Berlecon Research, 2001; Connolly, Jones, & O'Shea, 2005; Ehlers, 2004; among others), but in this debate about e-learning quality a number of concepts must be analysed on which there is no consensus. It has become clear that learning in the society and economy of information and knowledge requires the development of new competences, new visions, new approaches, and new models that will certainly differ from the traditional approaches rooted in industrial society. In this discussion about quality, it is increasingly clear that e-learning has to be centred on learners (Husson, 2004; Ruttenbur, Spickler, & Lurie, 2000), which means that the learners' needs should be determined in a concrete manner.
We know that quality covers a variety of areas, from the technology implemented to the organisational processes and the competences of the actors involved. Only by combining these areas (which used to be handled independently) will we be able to ensure not only quality, but also continuous improvement and innovation, thus increasing the skill levels of the intervening actors. Seufert and Euler (2005) identified five dimensions in their study that proved relevant for sustainable e-learning innovations: pedagogical, economic, technical, organisational, and cultural.

According to Ehlers (2007), quality development should occur in every development and delivery process of e-learning courses and programmes, not as an isolated evaluation approach at the end of a course. The author presents three concepts that can be used and combined to form a new, comprehensive concept of quality development: (1) "Quality development has to lead to better learning"; (2) quality development has to take the "learner's needs" and the "interests and requirements of the e-learning stakeholders" into account; and (3) "how existing concepts, approaches and strategies can be used for quality development" (Ehlers, 2007, p. 1). Ehlers, Hildebrand, Tescheler, and Pawlowski (2004) refer to the open nature of quality, as both a normative definition and a relation between supply and training needs.

Today there are many different positions on the concept of quality in e-learning as a resource, but it is consensual that a global quality framework for e-learning resources should be established that could support developments at the different levels: supply, users, producers, and technology. Many questions and challenges concerning quality management and quality assurance in e-learning have been raised by international organisations, national authorities, institutions, and consumers during the last 10 to 15 years (Rekkedal, 2006). In the next section, some of
the most important quality approaches will be analysed.
Quality Evaluation in E-Learning

As already mentioned, quality in e-learning is difficult to measure because it depends not only on the contents or the services, but also on the learner's interaction with those contents and services, with trainers, and with the technology. The perception of the quality level of an e-learning product or service depends on the role the evaluator plays in the process, which differs for the producer, the distributor, and the learner. According to the Fundación Universidad-Empresa de Valencia (ADEIT), the quality issue can be approached from different perspectives: standardization, benchmarking,1 certification, and peer review (ADEIT, 2002). This chapter analyses the benchmarking perspective, presenting different reference frameworks used to compare courses.

In the USA and Canada, the following reference frameworks have been identified: the e-Learning Courseware Certification (ECC) by the American Society for Training & Development (ASTD), Quality On the Line by the Institute for Higher Education Policy (IHEP), and the Open eQuality Learning Standards by the Learning Innovations Forum d'Innovation d'Apprentissage (LIfIA) and the European Institute for e-Learning (EIfEL). The ECC is ASTD's service for the certification of e-learning courses (ASTD, 2006). This certification is based on 19 standards that evaluate the interface, compatibility, production quality, and instructional design of e-learning courseware (ASTD, 2007). To speed up the process of submitting e-learning courses for review and certification, ASTD created the ECC Self-Assessor Tool, which allows the user to prescreen an asynchronous learning course against the 19 ECC standards (ASTD, 2006).
Quality On the Line presents criteria for evaluating the success of Internet-based distance education (Phipps & Merisotis, 2000); these will be analysed later in this chapter. Finally, the Open eQuality Learning Standards list a series of requirements that distance-learning products and services should meet in order to be effective (LIfIA & EIfEL, 2004). The standards proposed are based on the Canadian Recommended e-Learning Guidelines, copublished by the Canadian Association for Community Education (CACE) and the company FuturEd Inc. (Barker, 2002).

In Europe, e-learning quality has been a topic of several projects, most of them under the eLearning Initiative of the European Commission, such as the European Quality Observatory (EQO), Qual-e-Learning, Supporting Excellence in e-Learning (SEEL), SEEQUEL, the European Foundation for Quality in e-Learning (EFQUEL), Quality, Interoperability and Standards in E-learning (QUIS), Self Evaluation for Quality in e-Learning (SEVAQ), European Quality in Individualised Pathways in Education (EQUIPE), the e-Learning Project Example (ELEX), and the Virtual European Centre in e-Learning (EQUEL).

EQO is a portal for the promotion of e-learning quality (EQO, 2007). This project aims at the creation of a single place where people interested in e-learning (designers, learners, and managers) will find the best solution for their needs. Based on the results of a study on the use and dissemination of quality approaches in European e-learning (Ehlers, Goertz et al., 2005) and on additional experience from the EQO project, a set of guidelines has been defined that should shape the quality of e-learning by 2010: (a) learners must play a key part in determining the quality of e-learning services; (b) Europe must develop a culture of quality in education and training; (c) quality must play a central role in education and training policy; (d) quality must not be the preserve of large organisations; (e) support structures must be established to
provide competent, service-oriented assistance for organisations' quality development; (f) open quality standards must be further developed and widely implemented; (g) interdisciplinary quality research must become established in future as an independent academic discipline; (h) research and practice must develop new methods of interchange; (i) quality development must be designed jointly by all those involved; and (j) appropriate business models must be developed for services in the field of quality (Ehlers, Goertz et al., 2005, p. 11).

Qual-e-Learning is a research project that seeks to identify "good practices" based on the study of different e-learning courses (Qual e-Learning Project Consortium, 2003). Two of its most important results are a Handbook of Best Practices for the Evaluation of e-Learning Effectiveness (Qual e-Learning Project Consortium, 2004) and a tool for evaluating training effectiveness and impact measures (Qual e-Learning Project Consortium, 2007). SEEL is a project that focuses on the study of e-learning quality and its impact on local and regional development (EIfEL, 2007). It also aims at defining policies for the creation of quality e-learning courses. Through this project, guidelines were developed based on a series of indicators to be used both as success indicators and as quality assurance measures for new learning-region initiatives (SEEL, 2004). SEEQUEL was designed to create a European forum on the evaluation of e-learning quality in order to identify good practices and define guidelines (MENON Network EEIG, 2004). This project created a model for the analysis of e-learning courses that will be analysed later in this chapter. The purpose of EFQUEL
digital and learning literacy, and promote social cohesion (EFQUEL, 2006). One of the projects of EFQUEL is Triangle—a project based on the work of SEEL, EQO and SEEQUEL that seeks: to promote European diversity of quality approaches and services in the field of learning, education and training, to connect results and concepts on European e-Learning quality developed in three successful projects, to broaden the discussion and discourse on e-Learning quality, and to provide a sustainable infrastructure as a single entry point for e-Learning quality. (EFQUEL, 2006) The QUIS project is a transversal project in EU e-Learning Programme. Its activities are all directed towards quality in e-learning, interoperability, and reusability of e-learning materials and development of standards (TISIP, 2007). Among the products developed by this project are several reports and disseminations. SEVAQ operates within the framework of the Leonardo Da Vinci Programme. The project aims at improving the quality of the vocational and educational courses offered via open and distance learning, e-learning and blended learning, providing a number of good practices concerning quality and a multifunctional self-evaluation questionnaire in order to obtain valuable customer feedback (SEVAQ, 2005). The EQUIPE project aims to increase confidence and encourage innovative educational practices in lifelong learning in universities by developing, testing, and promoting quality assurance and enhancement tools (EQUIPE, 2007). This project operates under the Socrates Programme and is coordinated by the University of Porto (Portugal), European Universities Continuing Education Network (EUCEN), University of Genève (Switzerland), University of Limerick (Ireland), University of Valencia (Spain), Uni-
E-QUAL
versity of Turku (Filand), University of Bergen (Norway), University of Liverpool (UK), University of Genova (Italy) and Kaunas University of Technology (Lithuania). ELEX main aim was to exploit the practice potential of the communities within the European Vocational Training Association (EVTA) using ICT tools to support team work and by trying to maximize the dissemination and re-use of selected best practices of e-learning and ICT use in the field of vocational training (ELEX, 2005). EQUEL is coordinated by Lancaster University (UK) and involves key researchers and e-learning practitioners from 14 European Higher Education institutions. EQUEL stands for “e-quality” in e-learning and is a virtual centre of excellence for innovation and research in networked learning in higher and post-compulsory education (EQUEL, 2004). Under the framework of Socrates Programme—Minerva Action, the Meca-ODL project was held by a partnership led by the ADEIT. This project produced a methodological guide in order to analyse Internet learning quality developing a support computer application for the purpose (ADEIT, 2002). In England the Institute for IT training (IITT) developed several best practice documents stating that quality of e-learning services should be assured and strengthened such as Code of Practice for e-Learning Providers, Web Site Usability Standards, Competence Framework for e-Learning Designers and Developers, Competence Framework for e-Tutors, and a Charter for Learners to inform learners of what to require from an e-learning course (IITT, 2005). The British Open and Distance Learning Quality Council (ODLQC) developed the Open and Distance Learning Quality Council Standards (ODLQC, 2006). The Quality Assurance Agency for Higher Education (QAA) also developed quality guidelines for quality assurance in distance learning (QAA, 2004). Finally, the Quality E-Learning Assurance Services Ltd. (eQCheck), a British
firm, offers assessment and certification of eLearning products and services (eQCheck, 2006). The quality assessment bases upon the Canadian Recommended e-Learning Guidelines. In France the Association Française de Normalisation (AFNOR), in cooperation with the Forum Français pour la Formation Ouverte et à Distance (FFFOD), developed the French Code of Practice in e-Learning (AFNOR, 2004). In Germany, the Deutsches Institut für Normung e. V. (DIN), which represents the country interests in international standardisation activities, is responsible for the creation of the DIN PAS 1032-1 Reference Model for Quality Management and Quality Assurance (DIN, 2004). In Norway, the Norsk Forbund for Fjernundervisning og Fleksibel Utdanning (NFF), also known as Norwegian Association for Distance and Flexible Education (NADE) created the Guidelines for Quality Standards (NADE, 2001). In Portugal, the Portuguese Society for Innovation (SPI) took charge of the InnoeLearning project, which has been funded by the European Union Programme Information Society Technologies. This project consisted on the evaluation of e-learning sites based on quality standards criteria (SPI, 2003). Among the international organizations that have been promoting the development of quality management approaches, is the European Foundation for Quality Management (EFQM), the International Organisation for Standardisation (ISO), the European Foundation for Management Development (EFMD), the International Network for Quality Assurance Agencies in Higher Education (INQAAHE), the European Association for Distance Learning (EADL), the European Committee for Standardization (CEN), and the European Centre for the Development of Vocational Training (CEDEFOP). We can also find a huge variety of recommendations (e.g., AFT, 2000; Hollands, 2000), guidelines (e.g., EADL, 2003), criteria catalogues (e.g., Wright, 2003), checklists (e.g., Bellinger,
E-QUAL
2004; Scalan, 2003) and quality approaches for quality certification and accreditation (e.g., DECT, 2007; eduQua; 2005; EFMD, 2007). Current developments in standardisation provide an International Standard for harmonizing the various approaches used around the world for assessing the quality of e-learning initiatives. This common approach has been developed by the ISO/IEC Joint Technical Committee JTC 1, Information Technology, Subcommittee SC 36, Information Technology for Learning, Education and Training. This standard, called ISO/IEC 19796, comprises four parts: Part 1- General approach; Part 2: Quality model; Part 3: Reference methods and metrics; and Part 4: Best practice and implementation guide. The first part is already complete and next parts will be complete up to 2007 (ISO, 2005). The ISO/IEC 19796 – Part 1: General approach, “provides an overall framework which can be used for introducing quality approaches in all provider and user organizations of e-Learning” (ISO, 2005).
MODELS FOR THE EVALUATION OF E-LEARNING QUALITY

This section describes some of the main process-oriented models for the evaluation of the quality of e-learning courses, namely SEEQUEL, Open eQuality Learning Standards, Meca-ODL, Quality On the Line, and InnoeLearning.
Sustainable Environment for the Evaluation of Quality in e-Learning (SEEQUEL)

The SEEQUEL project, integrated in the European Commission e-Learning Initiative (MENON Network EEIG, 2004), was conducted by a group of enterprises and institutions called the MENON Network. This project created a model for the analysis of e-learning courses based on the idea that the learning experience depends on the inherent quality of three factors: the learning sources and resources committed, the process designed and implemented to generate learning results, and the coherence and meaningfulness of the experience within the context in which the learner is working and living. Within each of these factors different topics are analyzed (MENON Network EEIG, 2004). The learning sources factor analyses the supporting staff, teaching staff, learning materials, and learning infrastructures (MENON Network EEIG, 2004). The topics for core learning processes are guidance/training needs analysis, recruitment, learning design, learning delivery, course evaluation, and learners' assessment (MENON Network EEIG, 2004). Finally, the learning context section analyses the institutional setting, the cultural setting (organisational, professional, and general), the learning environment, the legislation, the financial setting, and value systems (MENON Network EEIG, 2004). This project highlights the subjectivity of e-learning quality assessment resulting from the evaluator's environment (university, secondary school, or industry), his/her role (teacher, student), and his/her worldview. The model requires a great number of aspects to be classified for each topic, and it does not give the learning context (communication with other students and the staff) enough relevance.
Open eQuality Learning Standards

The Canadian and American LIfIA and the European EIfEL created a joint committee, which recommended the "Open eQuality Learning Standards" for the analysis of e-learning courses (LIfIA & EIfEL, 2004). This evaluation grid serves as a model for everyone who wishes to plan, carry out, assess, or take e-learning courses (LIfIA & EIfEL, 2004).
The main distinctive features of this guide are: consumer-oriented (developed with particular attention to the return on investment in e-learning for learners), consensus-based (developed through consultation with a balance of provider and consumer groups), comprehensive (inclusive of all elements of the learning system: outcomes and outputs, processes and practices, inputs and resources), futuristic (describing a preferred future rather than the present circumstances for design and delivery), adaptable (with modifications, appropriate to all levels of learning services), and flexible (not all guidelines will apply in all circumstances) (LIfIA & EIfEL, 2004). This guide includes three areas: quality outcomes; quality processes and practices; and quality inputs and resources for e-learning products and services (LIfIA & EIfEL, 2004). Within the outcomes section, skills and knowledge, learning skills, and course credits are analyzed (LIfIA & EIfEL, 2004). The quality processes and practices section includes student management, the delivery and management of learning, the technologies used (computers and other ICT), communications facilities, and the digital archive and e-portfolio service/system (LIfIA & EIfEL, 2004). As to the quality inputs and resources for e-learning products and services, the evaluation includes: intended learning outcomes, curriculum content, teaching/learning materials, product/service information for potential students, learning technologies and materials, appropriate and necessary staff, the comprehensive course package (all materials and technologies), evidence of program success, program plans and budget, and advertising, recruiting, and admissions information (LIfIA & EIfEL, 2004). The model under analysis emphasises e-learning results, which requires previous deep knowledge of the courses, and it also does not take enough account of the learning context.
Methodological Guide for the Analysis of Quality in Open and Distance Learning (Meca-ODL)

This reference framework proposal derived from a European Socrates-Minerva project. The project, called Meca-ODL, was conducted by a partnership (English, Spanish, and German universities and Italian and Greek training organizations) led by ADEIT (ADEIT, 2002). The reference framework stands out because it covers the entire process from conception to evaluation. The seven stages, with their respective topics, included in this evaluation grid for a distance-learning course are: conception, analysis, design, content, production, delivery, and evaluation (ADEIT, 2002). The project has also developed an online evaluation tool implementing the methodology just described. With this tool, the evaluator's profile (developer/user/reseller) has to be given in order to select among the 140 available criteria. Items have a weight from 1 to 5, and their evaluation can also range from 1 to 5 (ADEIT, 2002). This evaluation tool is quite complete and attempts to ensure quality in every stage of course development (from design to evaluation). This evaluation, therefore, requires complete knowledge of the courses under analysis, which is not the case when the evaluator's analysis is based upon public information.
Quality on the Line

The grid "Quality On the Line" was developed by Phipps and Merisotis (2000) and published by the Institute for Higher Education Policy (IHEP). It is divided into seven areas comprising 45 topics. The benchmarks in the category institutional support include those activities by the institution that help to ensure an environment conducive to maintaining quality distance education, as well as policies that encourage the development of Internet-based teaching and learning. These benchmarks address technological infrastructure issues, a technology plan, and professional incentives for faculty (Phipps & Merisotis, 2000). The category course development includes benchmarks for the development of courseware, which is produced largely by faculty members on campus, subject experts in organizations, and/or commercial enterprises (Phipps & Merisotis, 2000). The category teaching/learning process addresses the array of activities related to pedagogy, the art of teaching. Included in this category are process benchmarks involving interactivity, collaboration, and modular learning (Phipps & Merisotis, 2000). The benchmarks in the category course structure address those policies and procedures that support and relate to the teaching/learning process. They include course objectives, availability of library resources, types of materials provided to students, response time to students, and student expectations (Phipps & Merisotis, 2000). The category student support includes the array of student services normally found on a college campus, including admissions, financial aid, and so forth, as well as student training and assistance in using the Internet (Phipps & Merisotis, 2000). The benchmarks of the category faculty support address activities that assist faculty in teaching online, including policies for faculty transition help as well as continuing assistance throughout the teaching period (Phipps & Merisotis, 2000). The benchmarks in the category evaluation and assessment relate to policies and procedures that address how, or if, the institution evaluates Internet-based distance learning. They include outcomes assessment and data collection (Phipps & Merisotis, 2000). Like Meca-ODL, this model presents a great number of topics, aiming at a complete analysis of all course details. This characteristic suits a possible quality certification but hampers its use by a learner applying for courses.
InnoeLearning

InnoeLearning derived from the identification of good practices in e-learning in three specific areas identified by the European Commission: learning models; interpersonal skills and informal learning; and learning communities (SPI, 2003). In the learning models area the topics to be considered are: previous skills analysis, tutor availability, trainee flexibility, information display control, goal definition, self-evaluation, content integration, document management, and ease of obtaining help (SPI, 2003). Interpersonal skills and informal learning, in turn, deal with the activities, recognition, transmissibility, environment, communicational interactivity, and motivation, and should be trainee-centred (SPI, 2003). Learning communities are characterised by clearly defined goals (the community theme), experienced moderators, a registration system, synchronous tools, asynchronous tools, group conflict resolution, a calendar, and the level of interaction (SPI, 2003). The present model is particularly oriented towards learning communities, disregarding the processes and contents, which may also hamper the success of a course.
E-QUAL MODEL

Unlike Meca-ODL or Quality On the Line, this new model, e-Qual, aims at analysing learning contents, learning contexts, processes, and results in a more balanced way, without putting so much emphasis on quality assurance of the whole development and implementation process of e-learning courses. The e-Qual model derives from the analysis of the reference frameworks presented in projects such as Open eQuality Learning Standards (LIfIA & EIfEL, 2004), SEEQUEL (MENON Network EEIG, 2004), InnoeLearning (SPI, 2003), Meca-ODL (ADEIT, 2002), and Quality On the Line (Phipps & Merisotis, 2000), as well as the several lists of good practices, and is based on structural simplicity and flexibility for the analysis of different pedagogical models. Table 1 compares e-Qual with the previously analysed models.

The structure of our model includes the four areas previously mentioned: learning contents, learning contexts, processes, and results. The first two areas relate to the resources necessary for the implementation of an e-learning course (learning contents, learning contexts). The third area deals with the processes ensured by the staff (administrative, technical, and pedagogical). The last area relates to the outcomes, in particular to the learners' satisfaction, which is the main aspect to be considered as far as quality is concerned. Within the four areas, 16 items have been identified, which are to be classified on a 0-10 scale. This scale allows the evaluator greater detail in the classification of each item than the 1-5 scale used, for instance, in Meca-ODL. It requires greater accuracy in evaluation but also helps distinguish the courses under analysis using a relatively small number of items. The flexibility of the classification chart lies in the assignment of a weight to each item, varying from 0 to 3 according to its importance to the evaluator. Weight 0 indicates that the item does not apply to the course analysis. The three remaining values (1, 2, 3) make it possible to distinguish the significance of the items used. Course analysis can be done on a global basis or by area.

The learning contents area comprises four items: written contents, multimedia contents, complementary bibliographical sources, and content management system. In this area all the content-related aspects are analysed. The first two items refer to the materials made available, while the third item addresses the references provided to deepen the topics. The last item, in turn, deals with the way contents are accessed. In the second area, the learning contexts, a fundamental infrastructure for learning performance is evaluated through three items: common space of the learning community, asynchronous communication tools, and synchronous communication tools. The aspects included in these items are determinant in the implementation of a real learning community, which characterises quality e-learning courses. The process area focuses on the staff fostering the learning processes. The five items to be classified are: administrative management (administrative staff), technical management (technical staff), management of content transfer (trainers/tutors), management of learning communities (trainers/moderators), and learners' evaluation.
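The classification scheme just described can be expressed as follows (a minimal sketch; the function name and the sample marks are ours, not from the chapter): each item receives a 0-10 mark and a 0-3 weight, weight 0 excludes the item, and the course score is the weighted mean of the remaining items.

```python
# Illustrative sketch of e-Qual item scoring (names and sample values are
# hypothetical): marks are on a 0-10 scale, weights on a 0-3 scale, and a
# weight of 0 removes the item from the analysis altogether.

def e_qual_score(marks, weights):
    """Weighted mean of 0-10 marks using 0-3 weights; weight 0 drops an item."""
    pairs = [(m, w) for m, w in zip(marks, weights) if w > 0]
    total_weight = sum(w for _, w in pairs)
    return sum(m * w for m, w in pairs) / total_weight

# A toy course with three items marked 8, 6, and 9, weighted 2, 0, and 1:
# the second item (weight 0) is ignored, so the score is (8*2 + 9*1) / 3.
print(round(e_qual_score([8, 6, 9], [2, 0, 1]), 2))  # 8.33
```

The same function serves both global analysis (all 16 items) and per-area analysis (only the items of one area), since excluded items simply carry weight 0.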
Table 1. Comparison between the quality evaluation models analysed

|                   | e-Qual | SEEQUEL | Open eQuality Learning Standards | Meca-ODL | Quality On the Line | Inno-eLearning |
| Learning Contents |   X    |    X    |                X                 |    X     |          X          |       X        |
| Learning Contexts |   X    |         |                                  |    X     |          X          |       X        |
| Processes         |   X    |         |                                  |    X     |          X          |       X        |
| Results           |   X    |         |                X                 |    X     |          X          |       X        |
E-learning introduced two new actors, tutors and moderators, whose performance affects learning quality. The significance of tutors and/or moderators depends on the pedagogical model adopted. In the case of individual-centred contents, tutors acquire the more important role; in the case of a learning community, on the other hand, moderators become the most important actors. In the results area, the last to be analysed, four items were identified: knowledge and skills acquired, training recognition, learners' satisfaction (clients), and business. Knowledge and skills acquired is the most quantifiable and measurable result, providing an objective idea of training quality. Training recognition is an important aspect for the client and must be regarded as such in this evaluation. Learners' satisfaction is no doubt the most sensitive item in quality terms, because true satisfaction is only achieved when it is not perceived by the client. The last item relates to the sustainability of the e-learning activity and proves determinant, since a non-profitable business ends up failing, no matter the quality achieved.
The following tables present the e-Qual model, with the relevant items and the characteristics to be considered within each of them. The aspects to be weighed in the classification of the items are as follows: the evaluator's sensitivity to any of the topics mentioned, the evaluator's perspective (learner, producer, or distributor), and the course objectives, which determine each item's significance. In order to test the e-Qual model, several e-learning courses were evaluated. In the latest evaluation, the e-Qual model was applied to three Microsoft Excel courses promoted by accredited international organizations. In this study the courses were chosen not according to pre-defined criteria but according to the possibility of "attending" them. The weight given to each item is the same (1), and the results show a balance between the courses, though course C stands out slightly (Table 6). The e-Qual model allows comparison between courses either globally or by area: learning contents, learning contexts, processes, and results. The model also allows privileging certain aspects of the courses through the specific weights assigned to the corresponding items. An evaluator who is more concerned with the contents (contents perspective) will attribute a higher weight (3) to items such as written contents (1.1), multimedia contents (1.2), complementary bibliographical sources (1.3), content management system (1.4), and management of content transfer (3.3). On the other hand, an evaluator who is context-centred (contexts perspective) will attribute a higher weight (3) to items such as common space of the learning community (2.1), asynchronous communication tools (2.2), synchronous communication tools (2.3), and management of learning communities (3.4).

Table 2. e-Qual model, learning contents

1. LEARNING CONTENTS
1.1 Written Contents: Credibility (recognised authors in the field), updating, respect for author rights, adjustment to the trainee's cultural background and learning needs, introduction of real experiences, modularity, and reusability.
1.2 Multimedia Contents: Credibility, updating, respect for author rights, technical and aesthetic quality, compliance with standards and usage guides, adequacy to the trainees' technical specifications (e.g., bandwidth), interactivity, adjustment to the trainee's cultural background and training needs, incorporation of real experiences, modularity, and reusability.
1.3 Complementary Bibliographical Sources: Web addresses, virtual libraries, and books.
1.4 Content Management System (CMS): Structural coherence according to a learning theory, independence from browsers and plug-ins, system personalization, ease of content location/visualization/download, and electronic safety to ensure the integrity and validity of the contents provided.

Table 3. e-Qual model, learning contexts

2. LEARNING CONTEXTS
2.1 Common Space of the Learning Community: Relevant administrative and pedagogic information, trainee introduction, activity schedule, and published news.
2.2 Asynchronous Communication Tools: E-mail, mailing lists, newsgroups, forums, blogs, and wikis are evaluated as to their ease of use and access speed.
2.3 Synchronous Communication Tools: Chat, document transfer, whiteboard, document sharing, audioconference, and videoconference are evaluated as to their ease of use and access speed, and as to whether trainees possess the appropriate technical conditions (e.g., bandwidth) and the necessary equipment (e.g., microphone).

Table 4. e-Qual model, processes

3. PROCESSES
3.1 Administrative Management: Course dissemination/advertising; presentation of duration, goals, and pedagogic methods; guarantee of the fulfilment of prerequisites for formal and informal learning; the existence of a reasonable trainee/staff ratio; enrollment procedures (online or in person); different payment processes (bank transfer, credit card, check, cash, etc.); and administrative support through different means (e-mail, chat, and phone).
3.2 Technical Management: Support for the usage of the CMS and LMS through different means (learning sessions, e-mail, chat, phone, frequently asked questions) and a glossary.
3.3 Management of Content Transfer: Trainer/tutor availability, assessed through the tracking of synchronous and asynchronous communications between trainees and tutors, as well as effective access to the contents made available, measured through the number of downloads and accesses.
3.4 Management of Learning Communities: Trainer intervention to stimulate and guide communities towards problem solving, and the effective use of synchronous and asynchronous communication tools among trainees and between trainees and the trainers/tutors.
3.5 Learning Evaluation: Quantitative (self-assessment tests, written exams, and simulations) or qualitative (learning contracts, presentations, written assignments, projects, portfolios, and peer evaluation).

Table 5. e-Qual model, results

4. RESULTS
4.1 Knowledge and Skills Acquired: Consistent with the established syllabus, relevant to the educational background and the profession, and improving the trainee's skills.
4.2 Training Recognition: Certification/accreditation by professional organizations, credits recognised by teaching institutions leading to academic degrees, recognition of similar courses taught face-to-face, and national and international recognition of the training performed.
4.3 Learners' Satisfaction: Learning goals achieved (efficacy); the money, time, and effort spent (effectiveness); the contents made available; and the contexts created.
4.4 Business: ROI (return on investment).

Table 6. Application of the e-Qual model

| Item                                      | Weight | Course A | Course B | Course C |
| 1. LEARNING CONTENTS                      |        |          |          |          |
| 1.1 Written Contents                      |   1    |    7     |    6     |    8     |
| 1.2 Multimedia Contents                   |   1    |    6     |    5     |    7     |
| 1.3 Complementary Bibliographical Sources |   1    |    6     |    6     |    7     |
| 1.4 Content Management System (CMS)       |   1    |    7     |    7     |    7     |
| 2. LEARNING CONTEXTS                      |        |          |          |          |
| 2.1 Common Space of the Learning Community|   1    |    7     |    8     |    7     |
| 2.2 Asynchronous Communication Tools      |   1    |    7     |    8     |    6     |
| 2.3 Synchronous Communication Tools       |   1    |    7     |    8     |    6     |
| 3. PROCESSES                              |        |          |          |          |
| 3.1 Administrative Management             |   1    |    8     |    8     |    8     |
| 3.2 Technical Management                  |   1    |    7     |    7     |    6     |
| 3.3 Management of Content Transfer        |   1    |    7     |    6     |    8     |
| 3.4 Management of Learning Communities    |   1    |    7     |    8     |    6     |
| 3.5 Learning Evaluation                   |   1    |    7     |    6     |    8     |
| 4. RESULTS                                |        |          |          |          |
| 4.1 Knowledge and Skills Acquired         |   1    |    7     |    7     |    8     |
| 4.2 Training Recognition                  |   1    |    8     |    6     |    6     |
| 4.3 Learners' Satisfaction                |   1    |    7     |    7     |    8     |
| 4.4 Business                              |   1    |    7     |    7     |    7     |
| Global Average                            |        |   7.0    |   6.9    |   7.1    |
Figure 1 presents, for the three courses under analysis, the initial results (equal weights) together with the results obtained with two different weight assignments (contents perspective and contexts perspective). An evaluator who is looking for quality contents will choose course C (7.2). An evaluator who is looking for experiences and the exchange of information will choose course B (7.3).
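These figures can be reproduced from the Table 6 marks. The following sketch is ours (the code and helper names are illustrative); only the marks and the weight assignments come from the chapter:

```python
import math

# Marks from Table 6 for the 16 e-Qual items, in the order 1.1-1.4, 2.1-2.3,
# 3.1-3.5, 4.1-4.4, for courses A, B, and C.
ITEMS = ["1.1", "1.2", "1.3", "1.4", "2.1", "2.2", "2.3",
         "3.1", "3.2", "3.3", "3.4", "3.5", "4.1", "4.2", "4.3", "4.4"]
MARKS = {
    "A": [7, 6, 6, 7, 7, 7, 7, 8, 7, 7, 7, 7, 7, 8, 7, 7],
    "B": [6, 5, 6, 7, 8, 8, 8, 8, 7, 6, 8, 6, 7, 6, 7, 7],
    "C": [8, 7, 7, 7, 7, 6, 6, 8, 6, 8, 6, 8, 8, 6, 8, 7],
}

def score(marks, weights):
    """Weighted average of the item marks."""
    return sum(m * w for m, w in zip(marks, weights)) / sum(weights)

def round1(x):
    """Round half up to one decimal, as in the figures reported above."""
    return math.floor(x * 10 + 0.5) / 10

equal = [1] * 16
# Contents perspective: weight 3 on items 1.1-1.4 and 3.3, weight 1 elsewhere.
contents = [3 if i in {"1.1", "1.2", "1.3", "1.4", "3.3"} else 1 for i in ITEMS]
# Contexts perspective: weight 3 on items 2.1-2.3 and 3.4, weight 1 elsewhere.
contexts = [3 if i in {"2.1", "2.2", "2.3", "3.4"} else 1 for i in ITEMS]

for course, marks in MARKS.items():
    print(course, round1(score(marks, equal)),
          round1(score(marks, contents)), round1(score(marks, contexts)))
# Equal weights yield 7.0, 6.9, and 7.1 (the Table 6 global averages); the
# contents perspective favours course C (7.2) and the contexts perspective
# favours course B (7.3), matching the figures reported in the text.
```

Reweighting only five (or four) of the sixteen items is enough to change the ranking of the courses, which illustrates the sensitivity of the model to the evaluator's perspective.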
Figure 1. Application of the e-Qual model with different perspectives (contexts perspective, contents perspective, and equal weights)

CONCLUSION

Quality approaches have different perspectives and interpretations depending on their methodology and implementation: they differ as to the goal (quality policies, quality management, quality assessment, and so forth), as to the target group (trainees, designers, decision-makers), and as to the method (process, product, skill, guidance). Therefore, any quality approach must be open to different values, goals, and interests. From the models analysed it can be concluded that some, such as SEEQUEL, Open eQuality Learning Standards, and InnoeLearning, are unbalanced, putting too much focus on one of the e-learning components. Meca-ODL and Quality On the Line are concerned with all course aspects, from design to evaluation, and therefore require deep knowledge of the whole development process. Such a deep analysis is not feasible when an applicant wishes to choose the best course based on the information made available to the public. The e-Qual model therefore appears as an alternative to the models proposed, analysing the different areas judged to be relevant: learning contents, learning contexts, processes, and results. This model avoids exhaustive analyses and also allows the evaluator to distinguish between items through the weights assigned.
The e-Qual model has been applied to analyse several distance-learning courses. A relevant aspect of this application was the model's flexibility in adapting to the evaluator's perspective (learner, producer, or distributor) and to the contents and contexts perspectives. This difference of perspective is revealed in the weights attributed to the different items under analysis. In fact, the results obtained by the courses were very different when the evaluator's perspective was changed. Another characteristic of this model is that it can be applied even when some information is lacking, by assigning a null weight to the corresponding items. In the future, the distinctive characteristics of each item must be clarified in order to provide a more detailed referential, thus reducing the evaluator's subjectivity. The evaluator will thus have a guide to help with the classification of the different items. With the e-Qual model we hope to contribute to the improvement of e-learning quality, and we believe that this is the only way it can develop and grow in a sustainable manner.
FUTURE TRENDS

The future of e-learning relies upon learner-centered learning in different environments and contexts, with a focus on informal learning. New information and communication technologies (synchronous and asynchronous) are essential to this kind of learning, but pedagogic issues are also determinant for its success. Learners are more independent but also more responsible, being capable of producing contents themselves. In the future, contents should also be taken into account, since improving their quality, mainly that of multimedia contents, requires huge investment. Considering their potential for reuse in different contexts, learning objects are particularly suitable for solving this problem. Apart from significantly reducing development costs and promoting personalized teaching, learning objects may contribute to the consolidation of e-learning by improving instruction quality without the need for teachers to become technology experts. The use of intelligent learning systems based on artificial intelligence to guide learners through their learning process is another aspect to be considered. As technology develops and becomes more accessible, games and complex simulations will also become usable in the learning process. In an era of constant change, rapid e-learning is another subject to keep an eye on, because it reduces the time between content creation and content availability. Blended learning, or mixed learning, and e-learning embedded in the learner's workflow are other contemporary trends. As far as quality is concerned, and following the trend of e-learning itself, there is a current movement that claims a leading role for the student in the characterization of the learning systems that may contribute to his/her success. This quality approach has deep consequences, since evaluation must then be based on the student's motivation and context, not on external, universal, objective criteria.
Referring to relevant technical standards is another approach to analysing e-learning quality: the ISO/IEC 19796 standard on information technology (learning, education, and training; quality management, assurance, and metrics). The quality of learning objects, in turn, should comply with relevant industry specifications such as the Sharable Content Object Reference Model (SCORM), Learning Object Metadata (LOM), or the AICC-CMI Guidelines for Interoperability. There also seems to be a clear tendency towards the integration and harmonization of both quality approaches (process and product).
NOTE

All trademarks and registered trademarks are the property of their respective owners. FuturEd, Portuguese Society for Innovation, and Quality E-Learning Assurance Services Ltd. (eQCheck) are company names and the property of their respective owners. Other company, product, brand, and service names may be trademarks or service marks of others.
reFerences ADEIT. (2002). Meca-ODL -methodological guide for the analysis of quality in open and distance learning delivered via Internet. Project Socrates-Minerva, European Commission. AFNOR. (2004). Code of practice: Information technologies—E-learning guidelines. Retrieved October 30, 2007, from http://www.fffod.org/fr/ doc/RBPZ76001-EN.doc AFT. (2000). Distance education: Guidelines for good practice. American Federation of Teachers. Retrieved October 30, 2007, from http://www.aft. org/higher_ed/pubs-reports/reportslist.htm ASTD. (2006). E-learning courseware certification. American Society for Training & De-
E-QUAL
velopment. Retrieved October 30, 2007, from http://www.astd.org/astd/marketplace/ecc ASTD. (2007). The ASTD institute e-learning courseware certification (ECC) standards. American Society for Training & Development. Retrieved October 30, 2007, from http://workf low.ecc-astdinstitute.org/index. cfm?sc=help&screen_name=cert_view ASTD & NGA. (2001). A vision of e-learning for America’s workforce. American Society for Training & Development. Retrieved October 30, 2007, from http://www.astd.org/NR/rdonlyres/ 8C76F61D-15FD-4C57-8554-D7E940A59009/0/ pp_ jh_ver.pdf Barker, K. (2002). Canadian recommended elearning guidelines (CanREGs). FuturEd and Canadian Association for Community Education. Bellinger, A. (2004). Good course, bad course. Retrieved October 30, 2007, from http:// www.trainingfoundation.com/articles/default. asp?PageID=1844 Berlecon Research. (2001). Wachstumsmarkt e-learning: Anforderungen, akteure und perspektiven im Deutschen markt. Berlin: Berlecon Research. Retrieved October 30, 2007, from http://www.berlecon.de/studien/e-Learning/index.html Connolly, M., Jones, N., & O’Shea, J. (2005). Quality assurance and e-learning: Reflections from the front line. Quality in Higher Education, 11(1), 59-67. Dahlgaard, J., Kristensen, K., & Khanji, G. K. (2005). Fundamentals of total quality management. London: Routledge. DECT. (2007). Distance education and training council. Retrieved October 30, 2007, from http://www.dect.org DIN. (2004). PAS 1032-1, Deutsches institut für normung e. V. Retrieved October 30, 2007, from http://www.din.de
Directorate-General for Education and Culture, European Commission. (2004). Final Report. Study of the e-learning suppliers’ market in Europe. Danish Technological Institute, Massy, J., Alphametrics Ltd, & Heriot-Watt University. EADL. (2003). Quality guide. Quality guidelines to improve the quality of distance learning institutes in Europe (2nd ed.). European Association Distance Learning. eduQua. (2005). Schweizerisches Qualitätszertifikat für Weiterbildungsinstitutionen. Retrieved October 30, 2007, from http://www.eduqua.ch EFMD. (2007). European quality improvement system. European Foundation for Management Development. Retrieved October 30, 2007, from http://www.efmd.be/equis EFQUEL. (2006). European foundation for quality in e-learning. Retrieved October 30, 2007, from http://www.qualityfoundation.org Ehlers, U. -D. (2004). Quality in e-learning. The learner as a key quality assurance category. European Journal Vocational Training, 29, 3-15. Ehlers, U. -D. (2007). Towards greater quality literacy in a e-learning Europe. e-Learning Papers, 2. Retrieved October 30, 2007, from http://www. e-Learningpapers.eu/index.php?page=doc&doc_ id=8549&doclng=6 Ehlers, U. -D., Goertz, L., Hildebrandt, B., & Pawlowski, J. (2005). Quality in e-learning. Use and dissemination of quality approaches in European e-learning. A study by the European quality observatory (CEDEFOP Panorama Series, 116). Luxembourg: Office for Official Publications of the European Communities. Ehlers, U. -D., Hildebrand, B., Tescheler, S., & Pawlowski, J. (2004). Designing tools and frameworks for tomorrows quality development. In Quality in European E-Learning Designing Tools and Frameworks for Tomorrows Quality Development, Workshop European Quality Observatory
E-QUAL
(EQO) co-located to the 4th IEEE International Conference on Advanced Learning Technologies. Joensuu: European Quality Observatory.

EIfEL. (2007). About SEEL, Supporting excellence in e-learning. European Institute for e-Learning. Retrieved October 30, 2007, from http://www.eife-l.org/activities/projects/seel

ELEX. (2005). EXEMPLO e-learning project. Retrieved October 30, 2007, from http://217.222.182.72/html/iess/pt.htm

eQCheck. (2006). Qualite-learning assurance services ltd. Retrieved October 30, 2007, from http://www.eqcheck.co.uk

EQO. (2007). European quality observatory. Retrieved October 30, 2007, from http://www.eqo.info

EQUEL. (2004). Virtual European centre in e-learning. Retrieved October 30, 2007, from http://equel.net

EQUIPE. (2007). Aims & objectives. European Quality in Individualised Pathways in Education. Retrieved October 30, 2007, from http://www.EQUIPE.up.pt/equipe/objectives.htm

European Commission. (2003). E-learning programme, Education. Programmes, Europe. Retrieved October 30, 2007, from http://ec.europa.eu/education/programmes/e-Learning/programme_en.html#

European Commission. (2005). E-learning, designing education tomorrow. Report on the consultation workshop. The "e" for our universities—Virtual campus. Organisational changes and economics models. DRAFT. Brussels: European Commission, Directorate-General for Education and Culture.

Figueiredo, A. D. (2002). Redes e Educação: A Surpreendente Riqueza de um Conceito. In Conselho Nacional de Educação (Ed.), Redes de Aprendizagem, Redes de Conhecimento. Lisboa: Conselho Nacional de Educação.
Hollands, N. (2000). Online testing: Best practices from the field. Retrieved October 30, 2007, from http://198.85.71.76/english/blackboard/testingadvice.html

Husson, A. (2004). Comparing quality models adequacy to the needs of clients in e-learning. In Quality in European E-Learning Designing Tools and Frameworks for Tomorrows Quality Development, Workshop European Quality Observatory (EQO) co-located to the 4th IEEE International Conference on Advanced Learning Technologies. Joensuu: European Quality Observatory.

IITT. (2005). Institute of IT training. Retrieved October 30, 2007, from http://www.iitt.org.uk

ISO. (2003). ISO standards compendium ISO 9000—quality management (10th ed.). Geneva: International Organization for Standardization.

ISO. (2005). ISO/IEC standard benchmarks quality of e-learning. International Organization for Standardization. Retrieved October 30, 2007, from http://www.iso.org/iso/en/commcentre/pressreleases/2006/ref992.html

Juran, J. M. (1988). Quality control handbook. New York: McGraw Hill.

LIfIA & EIfEL. (2004). Open e-quality learning standards. Joint e-quality committee of LIfIA (Learning Innovations Forum d'Innovation d'Apprentissage) and EIfEL (European Institute for e-Learning).

Marques, C. (2004). E-learning: Uma nova forma de aprender. Revista e-Ciência, 1(1), 23.

MENON Network EEIG. (2004). Sustainable environment for the evaluation of quality in e-learning. SEEQUEL core quality framework. E-Learning Initiative, European Commission.

NADE. (2001). Quality standards for distance education. Norwegian Association for Distance and Flexible Education. Retrieved October 30, 2007, from http://www.nade-nff.no/nff2/filer/Kvalitet/Kvalitetsnormer%20for%20fjernundervisning.pdf
ODLQC. (2006). Open and distance learning quality council standards. Open and Distance Learning Quality Council. Retrieved October 30, 2007, from http://www.odlqc.org.uk/standard.doc

Phipps, R., & Merisotis, J. (2000). Quality on the line: Benchmarks for success in Internet-based distance education. Washington, DC: IHEP—Institute for Higher Education Policy.

QAA. (2004). Code of practice for the assurance of academic quality and standards in higher education (2nd ed.). Mansfield: Quality Assurance Agency for Higher Education.

Qual E-Learning Project Consortium. (2003). Qual-e-learning project. Qual E-Learning Project Consortium. Retrieved October 30, 2007, from http://www.qual-e-Learning.net/cgi/index.php?wpage=overview

Qual E-Learning Project Consortium. (2004). Handbook of best practices for the evaluation of e-learning effectiveness. Qual e-Learning Project Consortium.
Qual E-Learning Project Consortium. (2007). Qual e-learning evaluation tool. Qual-e-Learning Project. Retrieved October 30, 2007, from http://www.qual-e-Learning.net/cgi/index.php

Rekkedal, T. (2006). Distance learning and e-learning quality for SMEs—State of the art. Paper prepared for the EU Leonardo Project, E-learning Quality for SMEs: Guidance and Counselling.

Ruttenbur, B., Spickler, G., & Lurie, S. (2000). E-learning—The engine of the knowledge economy. New York: Morgan Keegan & Co.

Scanlan, C. L. (2003). Reliability and validity of a student scale for assessing the quality of Internet-based distance learning. Online Journal of Distance Learning Administration, 6(3). Retrieved October 30, 2007, from http://www.westga.edu/~distance/ojdla/fall63/scanlan63.html

SEEL. (2004). Quality guidelines for learning strategy and innovation, version 3. Supporting Excellence in E-Learning.

Seufert, S., & Euler, D. (2005). Nachhaltigkeit von e-Learning-Innovationen: Fallstudien zu Implementierungsstrategien von e-Learning als Innovationen an Hochschulen. St. Gallen: Swiss Centre for Innovations in Learning. Retrieved October 30, 2007, from http://www.scil.ch/publications/docs/2005-01-seufert-euler-nachhaltigkeit-e-Learning.pdf

SEVAQ. (2005). Self evaluation for quality in e-learning. Retrieved October 30, 2007, from http://www.sevaq.com

SPI. (2003). Empre-learning: Promoção de Estruturas de e-Learning Inovadores, em Língua Portuguesa, que Permitam o Aumento de Competências e Aumentem a Empregabilidade. Porto: Sociedade Portuguesa de Inovação.

TISIP. (2007). QUIS—Quality, interoperability and standards in e-learning. Trondheim: TISIP Research Foundation.

Ullmo, P.-A., & Ehlers, U.-D. (2007). Quality in e-learning. E-Learning Papers, 2. Retrieved October 30, 2007, from http://www.e-Learningpapers.eu/index.php?page=volume

Wright, C. R. (2003). Criteria for evaluating the quality of online courses. Retrieved October 30, 2007, from http://www.imd.macewan.ca/imd/content.php?contentid=36

Additional Reading
ACE. (2001). Distance learning evaluation guide. Washington, DC: American Council on Education. Artur, A. C., Coll, J., Fagerberg, T., Leeuwen, R. J. M. V., Liikane, K., Liivrand, A. et al. (2006). State of the art report on e-learning quality for
SMEs: An analysis of e-learning experiences in European small and medium sized enterprises. Bekkestua, Norway: ELQ-SME Project 2006.

Barbera, E. (2004). Quality in virtual education environments. British Journal of Educational Technology, 35(1), 13-20.

Barron, T. (2003). LoD survey: Quality and effectiveness. Learning Circuits. Retrieved October 30, 2007, from http://www.learningcircuits.org/2003/may2003/qualitysurvey.htm

Belanger, F., & Jordan, D. (2000). Evaluation and implementation of distance learning: Technologies, tools and techniques. Hershey, PA: Idea Group Publishing.

Bertzeletou, T. (2002). Presentation of the work of the European forum on quality in VET: Main outcomes and list of possible quality dimensions, criteria and indicators on quality management approaches (QMA), self assessment, examination and certification arrangements, quality indicators. Retrieved October 30, 2007, from http://www2.trainingvillage.gr/etv/quality/Summary_of_the_main_results.doc

Booth, A., Levy, P., Bath, P. A., Lacey, T., Sanderson, M., & Diercks-O'Brien, G. (2005). Studying health information from a distance: Refining an e-learning case study in the crucible of student evaluation. Health Information and Libraries Journal, 22(Suppl. 2), 8-19.

Bourne, J., & Moore, J. C. (2004). Elements of quality online education: Into the mainstream. Needham, MA: Sloan-C.

Brittain, S., & Liber, O. (2004). A framework for the pedagogical evaluation of e-learning environments. Retrieved October 30, 2007, from http://www.cetis.ac.uk/members/pedagogy/files/4thMeet_framework/VLEfullReport

Carr-Chellman, A., & Duchastel, P. (2000). The ideal online course. British Journal of Educational Technology, 31(3), 229-241.
Chao, T., Saj, T., & Tessier, F. (2006). Establishing a quality review for online courses. Educause Quarterly, 29(3). Retrieved October 30, 2007, from http://www.educause.edu/apps/eq/eqm06/eqm0635.asp?bhcp=1

Concannon, F., Flynn, A., & Campbell, M. (2005). What campus-based students think about the quality and benefits of e-learning. British Journal of Educational Technology, 36(3), 501-512.

Dam, N. V. (2004). E-quality in e-learning. Chief Learning Officer Magazine. Retrieved October 30, 2007, from http://clomedia.com/content/templates/clo_article.asp?articleid=507&zoneid=111

Dondi, C., Moretti, M., Husson, A.-M., & Pawlowski, J. M. (2005). Providing good practice for e-learning quality approaches. Interim Report: CWA 1. Project Team Quality Development. CEN/ISSS WSLT.

Ehlers, U.-D. (2004). Quality in e-learning from a learner's perspective. Paper presented at the Third EDEN Research Workshop 2004, Oldenburg, Germany.

Ehlers, U.-D. (2005). What do you need for quality in e-learning? e-Learningeuropa.info. Retrieved October 30, 2007, from http://www.e-Learningeuropa.info/directory/index.php?page=doc&doc_id=6068&doclng=6

Ehlers, U.-D. (2007). The E—Empowering learners: Myths and realities in learner-orientated e-learning quality. E-Learning Papers, 2. Retrieved October 30, 2007, from http://www.e-Learningpapers.eu/index.php?page=doc&doc_id=8550&doclng=6

Ehlers, U.-D., & Goertz, L. (2006). Handbook on quality and standardization in e-learning. Berlin-Heidelberg, Germany: Springer Verlag.

Ehlers, U.-D., & Pawlowski, J. M. (2004). E-learning-quality: A decision support model for European quality approaches. In G. Fietz, C. Godio & R. Mason (Eds.), E-learning for international
markets. Development and use of e-learning in Europa. Bielefeld.

EVTA. (2005). Selected e-learning in vocational training—Good practices collection. Final report. Brussels: European Vocational Training Association.

Giannini-Gachago, D., Lee, M., & Thurab-Nkhosi, D. (2005). Towards development of best practice guidelines for e-learning courses at the University of Botswana. In Proceedings of the IASTED International Conference on Computers and Advanced Technology for Education (CATE 2005), Aruba.

Govindasamy, T. (2001). Successful implementation of e-learning: Pedagogical considerations. The Internet and Higher Education, 4(3), 287-299.

Heldberg, J. G. (2003). Ensuring quality e-learning: Creating engaging tasks. Educational Media International, 40(3-4), 175-186.

Herrington, A., Herrington, J., Oliver, R., Stoney, S., & Willis, J. (2001). Quality guidelines for online courses: The development of an instrument to audit online units. In G. Kennedy, M. Keppell, C. McNaught & T. Petrovic (Eds.), Meeting at the Crossroads: Proceedings of ASCILITE 2001 (pp. 263-270). Melbourne: The University of Melbourne.

Hicks, S. (2000, December). Evaluating e-learning. Training and Development, 77-79.

Hughes, J., & Attwell, G. (2002). A framework for the evaluation of e-learning. Paper presented at European Seminars—Exploring Models and Partnerships for e-Learning in SMEs, Scotland and Brussels, Belgium.
Khan, B. H. (2005). Managing e-learning strategies: Design, delivery, implementation and evaluation. Information Science Publishing. Kidney, G., Cummings, L., & Boehm, A. (2007). Toward a quality assurance approach to e-learning courses. International Journal on E-Learning, 6(1), 17-30. Littlejohn, A. (2005). Key issues in the design and delivery of technology-enhanced learning. In P. Levy & S. Roberts (Eds.), Developing the new learning environment: The changing role of the academic librarian (pp. 70-90). London: Facet. Lockee, B., Moore, M., & Burton, J. (2002). Measuring success: Evaluation strategies for distance education. Educause Quarterly, 1. Retrieved October 30, 2007, from http://www.educause. edu/ir/library/pdf/EQM0213.pdf Macpherson, A., Elliot, M., Harris, I., & Homan, G. (2004). E-learning: Reflections and evaluation of corporate programmes. Human Resource Development International, 7(3), 295-313. Mandinach, E. B. (2005). The development of effective evaluation methods for e-learning: A concept paper and action plan. Teachers College Record, 107(8), 1814-1835. Massy, J. (2002). Quality and e-learning in Europe. Reading: BizMedia Ltd. McGorry, S. Y. (2003). Measuring quality in online programs. The Internet and Higher Education, 6, 159-177. McNaught, C., & Lam, P. (2005). Building an evaluation culture and evidence base for e-learning in three Hong Kong universities. British Journal of Educational Technology, 36(4), 599-614. Melton, R. (2002). Planning and developing open and distance learning: A framework for quality. London: RoutledgeFalmer.
Meyer, K. A. (2002). Quality in distance education: Focus on online learning. ASHE-ERIC Higher Education Report, 29(4), 1-121.

Nicolaou, C. T., Nicolaidou, I. A., & Constantinou, C. P. (2005). The e-learning movement as a process of quality improvement in higher education. Educational Research and Evaluation, 11(6), 605-622.

Pawlowski, J. M. (2006). Adopting quality standards for education. CEDEFOP.

Pond, W. K. (2002). Twenty-first century education and training: Implications for quality assurance. The Internet and Higher Education, 4, 185-192.

Rovai, A. F. (2003). A practical framework for evaluating online distance education programs. The Internet and Higher Education, 6(2), 109-124.

Rumble, G. (2000). The globalisation of open and flexible learning: Considerations for planners and managers. Online Journal of Distance Learning Administration, 3(3). Retrieved October 30, 2007, from http://www.westga.edu/~distance/ojdla/fall33/rumble33.html

Schifter, C., Greenwood, L., & Monolescu, D. (2004). The distance education evolution: Issues and case studies. Information Science Publishing.

Shifrin, T. (2006, March 7). International standard aimed at improving quality of e-learning. Computer Weekly, 76.
Sonwalkar, N. (2002). A new methodology for evaluation: The pedagogical rating of online courses. Syllabus, 15(6), 18-21.

Tulloch, J. B., & Sneed, J. R. (2000). Quality enhancing practices in distance education: Teaching and learning. Washington, DC: Instructional Telecommunications Council.

Welber, M. (2002). How AT&T adapted Kirkpatrick's evaluation tools to e-learning then applied the same rigor to selecting vendors. E-Learning, 3(6), 1-3.

Wild, R. H., & Hope, B. (2003). DATQUAL: A prototype e-learning application to support quality management practices in service industries. TQM & Business Excellence, 14(6), 695-713.

Willging, P. A. (2004). Factors that influence students' decision to drop out of online courses. The Journal of Asynchronous Learning Networks, 8(4). Retrieved October 30, 2007, from http://www.sloan-c.org/publications/jaln/v8n4/v8n4_willging_member.asp

Wirth, M. A. (2005). Quality management in e-learning: Different paths, similar pursuits. Paper presented at 2nd International SCIL Congress. Retrieved October 30, 2007, from http://www.scil.ch/congress-2005/programme-10-11/docs/workshop-1-wirth-text.pdf

Endnotes

1. A test used to measure performance.
2. Content can be used in different learning contexts with different goals.
3. Data transfer capacity or speed of transmission of a digital communications system.
Compilation of References
ADEIT. (2002). Meca-ODL -methodological guide for the analysis of quality in open and distance learning delivered via Internet. Project Socrates-Minerva, European Commission. AFNOR. (2004). Code of practice: Information technologies—E-learning guidelines. Retrieved October 30, 2007, from http://www.fffod.org/fr/doc/RBPZ76001-EN.doc AFT. (2000). Distance education: Guidelines for good practice. American Federation of Teachers. Retrieved October 30, 2007, from http://www.aft.org/higher_ed/ pubs-reports/reportslist.htm Agra, M. J., Gewerc, A., & Montero, M. L. (2003). El portafolios como herramienta de análisis en experiencias de formación online y presenciales. Enseñanza, 23, 101-114. Agra, M.J., Gewerc, A., & Montero, M.L. (2002). El portafolios como herramienta de análisis en experiencias de formación online y presenciales II Congreso Europeo de Tecnologías de la Información en la Educación y en la Ciudadanía. Barcelona. Ahmad, A., Basir, O., & Hassanein, K. (2004). Adaptive user interfaces for intelligent e-learning: Issues and trends. In Proceedings of the 4th International Conference on Electronic Business (ICEB2004) (pp. 925-934). Albano, G. (2005). Mathematics and e-learning: Students’ beliefs and waits. In International Commission for the Study and Improvement of Mathematics Education 57 Congress, Changes in Society: A Challenge for Mathematics Education (pp. 153-157). Piazza Armerina: Università di Palermo Press.
Albano, G. (2006). A case study about mathematics and e-learning: First investigations. In International Commission for the Study and Improvement of Mathematics Education 58 Congress, Changes in Society: A Challenge for Mathematics Education (pp. 146-151). Plezeň: University of West Bohemia Press. Albano, G., Bardelle, C., & Ferrari, P. L. (2007). The impact of e-learning on mathematics education: Some experiences at university level. La matematica e la sua didattica, 21(1), 61-66. Albano, G., Gaeta, M., & Salerno, S. (2006). E-learning: A model and process proposal. International Journal of Knowledge and Learning, 2(1/2), 73-88. Alexander, B. (2006). Web 2.0: A new wave of innovation for teaching and learning? EDUCAUSE Review, 41(2), 32-44. Allen, I. E., & Seaman, J. (2004). Sizing the opportunity: The quality and extent of online education in the US, 2002 and 2003. Needham, MA: Sloan-C. Allen, T. D., McManus, S. E., & Russell, J. E. A. (1999). Newcomer socialization and stress: Formal peer relationships as a source of support. Journal of Vocational Behaviour, 54(3), 453-470. Anderson, E.M., & Shannon, A.L. (1995). Towards a conceptualization of mentoring. In T. Kerry & A.S. Mayes (Eds.), Issues in mentoring. London: A.S. Routledge. Anderson, G. (2006). E-learning 2.0 is about people. Konferenz Professionelles Wissensmanagement - Erfahrungen und Visionen Live von der ICL 2006». Retrieved October 25, 2007, from http://elearningblog.tugraz. at/archives/130
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Argyris, C. (2004). Double-loop learning and implementable validity. In H. Tsoukas & N. Mylonopoulos (Eds.), Organizations as knowledge systems: Knowledge, learning, and dynamic capabilities (pp. 29-45). New York: Palgrave Macmillan. Argyris, C., & Schon, D.A. (1996). Organizational learning II. Reading, MA: Addison-Wesley Publishing Company. Armitage, S., & O’Leary, R. (2003). A guide for learning technologists. Learning and teaching support network. York, UK: LTSN Generic Center. Asgari, M., & O’Neill, K. (2004). What do they mean by success? Contributors to perceived success in a telementoring program for adolescents. Paper presented at the Annual Meeting of the American Educational Research Association, San Diego, CA. Ashton, H.S., Beevers, C. E., Milligan, C. D., Schofield, D. K., Thomas, R. C., & Youngson, M. A., (2006). Moving beyond objective testing in online assessment. In S.L. Howell & M. Hricko (Eds.), Online assessment & measurement. Case studies from higher education, K-12 and Corporate (pp. 116-128). Hershey, PA: Idea Group. Assessment Reform Group. (1999). Assessment for learning: Beyond the black box. Cambridge, UK: University of Cambridge School of Education. Retrieved October 29, 2007, from http://arg.educ.cam.ac.uk/AssessInsides. pdf ASTD & NGA. (2001). A vision of e-learning for America’s workforce. American Society for Training & Development. Retrieved October 30, 2007, from http://www.astd.org/NR/rdonlyres/8C76F61D-15FD4C57-8554-D7E940A59009/0/pp_ jh_ver.pdf ASTD. (2006). E-learning courseware certification. American Society for Training & Development. Retrieved October 30, 2007, from http://www.astd.org/astd/marketplace/ecc ASTD. (2007). The ASTD institute e-learning courseware certification (ECC) standards. American Society for Training & Development. Retrieved October 30, 2007, from http://workflow.ecc-astdinstitute.org/index. cfm?sc=help&screen_name=cert_view
Augar, N., Raitman, R., & Zhou, W. (2004). Teaching and learning online with wikis. Paper presented at the ASCILITE Australasian Society for Computers in Learning in Tertiary Education 2004 Conference. Perth, WA. Avgerou, C. (2001). The significance of context in information systems and organizational change. Information Systems Journal, 11, 43-63. Avgerou, C., & Madon, S. (2004). Framing IS studies: Understanding the social context of IS innovation. In C. Avgerou, C. U. Cibbora & F. F. Land (Eds.), The social study of ICT (pp. 162-182). Oxford: Oxford University Press. Avison, D., & Fitzgerald, G. (2003). Information systems development: Methodologies, techniques and tools (3rd ed.). Maidenhead: McGraw-Hill. Avison, D.E., & Wood-Harper, A.T. (1990). Multiview: An exploration in information systems development. Henley-on-Thames: Alfred Waller. Ayersman, D. J. (1996). Reviewing the research on hypermedia-based learning. Journal of Research on Computing in Education, 28(4), 501-525. Balacheff, N. (2000). Teaching, an emergent property of e-learning environments. The Information Society for All. (IST 2000). Retrieved October 21, 2007, from http://www-didactique.imag.fr/Balacheff/TextesDivers/IST2000.html Balacheff, N., & Sutherland, R. (1999). Didactical complexity of computational environments for the learning of mathematics. International Journal of Computers for Mathematical Learning, 4, 1-26. Baldacci, M. (1999). L’individualizzazione. Basi psicopedagogiche e didattiche. Bologna: Pitagora. Banathy, B.H. (1996). Systems inquiry and its application in education. In D.H. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 74-92). New York: Prentice Hall. Banathy, B.H. (1999). Systems thinking in higher education: Learning comes to focus. Systems Research and Behavioral Science, 16, 133-145.
Banta, T. W. (Ed.). (2003). Portfolio assessment: Uses, cases, scores and impact. San Francisco: Jossey-Bass.
Barberà, E. (2006). Los fundamentos teóricos de la tutoría presencial y en línea: Una perspectiva socioconstructivista. In J.A. Jerónimo Montes & E. Aguilar Rodríguez (Eds.), Educación en red y tutoría en línea (pp. 161-180). Mexico: UNAM FES-Z.

Barberà, E., & Badia, A. (2004). Educar con aulas virtuales. Orientaciones para la innovación en el proceso de enseñanza y aprendizaje. Barcelona: A. Machado Libros.

Barker, K. (2002). Canadian recommended e-learning guidelines. CACE. Retrieved October 19, 2007, from http://www.futured.com/pdf/CanREGs%20Eng.pdf

Barker, K., Trafalis, T., & Rhoads, T.R. (2004). Learning from student model. In System and Information Engineering Design Symposium (pp. 79-86).

Barnes, K., Marateo, R., & Ferris, S. (2007). Teaching and learning with the net generation. Innovate, 3(4). Retrieved October 18, 2007, from http://www.innovateonline.info/index.php?view=article&id=382

Barragán Sánchez, R. (2005). El Portafolio, metodología de evaluación y aprendizaje de cara al nuevo Espacio Europeo de Educación Superior. Una experiencia práctica en la Universidad de Sevilla. Revista Latinoamericana de Tecnología Educativa, 4(1), 121-139. Retrieved October 28, 2007, from http://www.unex.es/didactica/RELATEC/sumario_4_1.htm

Bastiaens, T., & Martens, R. (2000). Conditions for Web-based learning with real events. In B. Abbey (Ed.), Instructional and cognitive impacts of web-based education (pp. 1-32). London: Idea Group Publishing.

Bayne, R. (1995). MBTI: A critical review. London: Chapman & Hall.

Beck, J.E., & Woolf, B.P. (1998). Using a learning agent with a student model. Lecture Notes in Computer Science, 1452, 6-15.

Beck, J.E., Jia, P., Sison, J., & Mostow, J. (2003). Predicting student help-request behavior in an intelligent tutor for reading. In 9th International Conference on User Modelling (pp. 303-312).

BECTA. (2006). Retrieved October 29, 2007, from www.becta.org.uk
Behrens, J.T., Collison, T.A., & DeMark, S. (2006). The seven C’s of comprehensive online assessment: Lessons learned from 36 million classroom assessments in the Cisco networking academy program. In S.L Howell & M. Hricko (Eds.), Online assessment and measurement. Case studies from higher education, K-12 and corporate (pp. 229-245). London: Information Science Publishing. Beitler, M. A., & Frady, D. A. (2002). E-learning and e-support for expatriate managers. In H. B. Long & Associates (Eds.), Twenty-first century advances in selfdirected learning (CD). Boynton Beach, FL: Motorola University. Bellinger, A. (2004). Good course, bad course. Retrieved October 30, 2007, from http://www.trainingfoundation. com/articles/default.asp?PageID=1844 Berlecon Research. (2001). Wachstumsmarkt e-learning: Anforderungen, akteure und perspektiven im Deutschen markt. Berlin: Berlecon Research. Retrieved October 30, 2007, from http://www.berlecon.de/studien/e-Learning/index.html Berman, S. H., & Pape, E. (2001). A consumer’s guide to online courses. School Administrator, 58(9), 14. Bernabé, A. (2004). Blended learning. Conceptos básicos. Pixel-Bit. Revista de Medios y Educación, 23, 7-20. Retrieved October 18, 2007, from www.lmi.ub.es/ personal/bartolome/articuloshtml/04_blended_learning/documentacion/1_bartolome.pdf Bezdek, J.C., & Pal, S.K (1992). Fuzzy models for pattern recognition: Methods that search for structures in data. New York: IEEE Press. Bierema, L. L., & Merriam, S. B. (2002). E-mentoring: Using computer mediated communication to enhance the mentoring process. Innovative Higher Education, 26(3), 211-227.
Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32, 347-364. Biggs, J. (1998). Assessment and classroom learning: A role for summative assessment? Assessment in Education, 5(1), 103-110. Biggs, J. (1999). Teaching for quality learning at university: What the student does. Buckingham: Open University Press. Biggs, J., & Tang, C. (1997). Assessment by portfolio: Constructing learning and designing teaching. Research and Development in Higher Education, 79-87. Biggs, J.B. (2003). Teaching for quality learning at university (2nd ed.). Buckingham: SRHE & Open University Press. Bishop, J. (2006, February). Training talk newsletter. Retrieved October 16, 2007, from http://www.dest.gov. au/sectors/training_skills/publications_resources/trainingtalk/issue_20/ Black, P., & William, D. (1998). Assessment and classroom learning, Assessment in Education, 5(1), 7-74. Blackboard. (2006). Blackboard unveils blackboard beyond initiative. Four bold inaugural projects will advance e-learning 2.0 vision. Retrieved October 25, 2007, from http://www.blackboard.com/company/press/ release.aspx?id=823603 Blair, A. (2004, May 3). Speech to NAHT conference. Retrieved October 16, 2007, from http://www.number10. gov.uk/output/Page5730.asp Blamire, R. (2006). Insight blog. The online diary of European schoolnet’s insight team. Retrieved October 25, 2007, from http://blog.eun.org/insightblog/2006/06/ elearning_20.html Blinco, K., Mason, J., McLean, N., & Wilson, S. (2004, July 19). Trends and issues in e-learning infrastructure development. A White paper for alt-i-lab 2004, prepared on behalf of DEST (Australia) and JISC-CETIS (UK) (Version 2). Retrieved October 19, 2007, from http://www.jisc.ac.uk/uploaded_documents/Altilab04infrastructureV2.pdf
Blurton, C. (2000). New directions of ICT-use in education. UNESCO World Communication and Information Report. Paris: UNESCO.

Boley, H. (2003). RACOFI: A rule-applying collaborative filtering system. In 2003 IEEE/WIC International Conference on Web Intelligence/Intelligent Agent Technology.

Bonabeau, E., Dorigo, M., & Theraulaz, G. (1999). Swarm intelligence: From natural to artificial systems. NY: Oxford University Press.

Booth, R., & Berwyn, C. (2003). The development of quality online assessment in vocational education and training. Leabrook, Australia: Australian Flexible Learning Framework. Retrieved October 29, 2007, from www.ncver.edu.au/research/proj/nr1F02_1.pdf

Borst, W.N. (1997). Construction of engineering ontologies for knowledge sharing and reuse. University of Twente, Centre for Telematics and Information Technology.

Boud, D., & Falchikov, N. (1989). Quantitative studies of self-assessment in higher education: A critical analysis of findings. Higher Education, 18(5), 529-549.

Boud, D., Cohen, R., & Sampson, J. (1999). Peer learning and assessment. Assessment and Evaluation in Higher Education, 24(4), 413-426.

Boyd-Barrett, O. (2000). Distance education provision by universities: How institutional context affect choices. Information Communication & Society, 3(4), 474-493.

Brahm, C., & Kleiner, B. H. (1996). Advantages and disadvantages of group decision-making approaches. Team Performance Management, 2(1), 30-35.

Bransford, J.D., Brown, A.L., & Cocking, R.R. (Eds.). (1999). How people learn: Brain, mind, experience and school. Washington: National Academic Press.

Brian, L. (2004). Taking a walk on the wiki side. Campus Technology. Retrieved October 25, 2007, from http://www.campustechnology.com/article.asp?id=9200

Briggs, J. (2003). Rich pictures of UK education. Retrieved October 16, 2007, from http://www.reengage.org/go/Article_111.html
Brightman, H. J. (2006). GSU master teacher program: On critical thinking. Retrieved October 26, 2007, from http://www2.gsu.edu/~dschjb/wwwcrit.html Britain, S. (2004, May). A review of learning design: Concept, specifications and tools. JISC. Retrieved October 19, 2007, from http://www.jisc.ac.uk/uploaded_documents/ACF1ABB.doc Brookhart, S. M. (2001). Successful students’ formative and summative use of assessment information. Assessment in Education, 8(2), 153-169. Brooks, B. A., & Madda, M. (1999). How to organize a professional portfolio for staff and career development. Journal for Nurses in Staff Development, 15(1), 5-10. Brousseau, G. (1997). Theory of didactical situations in mathematics. Kluwer Academics Publisher. Brown, E., Cristea, A., Stewart, C., & Brailsford, T. (2005). Patterns in authoring of adaptive educational hypermedia: A taxonomy of learning styles. Educational Technology & Society, 8(3), 77-90. Retrieved October 26, 2007, from http://www.ifets.info/journals/8_3/8.pdf Brown, E., Gibbs, G., & Glover, C. (2003). Evaluation tools for investigating the impact of assessment regimes on student learning. Bioscience Education E-Journal, 2. Retrieved October 28, 2007, from http://bio.ltsn. ac.uk/journal/vol2/beej-2-5.htm Brown, G., Bull, J., & Pendleburg, M. (1997). Assesing student learning in higher education. London: Routledge. Brown, J. S., & Duguid, P. (1998). Universities in the digital age. In B. L. Hawkins & P. Battin (Eds.), The mirage of continuity: Reconfiguring academic information resources for the 21st century (pp. 39-60). Washington, DC: Council on Library and Information Resources. Brown, S., & Glasner, A. (Ed.). (1999). Assessment matters in higher education. UK: Open University Press. Brown, S., & Knight, P. T. (1994). Assessing learners in higher education. London: Kogan Page. Bruner, J. (1997). La educación, puerta de la cultura. Madrid, Spain: Visor.
Bruner, J. (1998). Desarrollo cognitivo y educación. Madrid, Spain: Morata. Bryndum, S., & Montes, J. A. (2005). La motivación en los entornos telemáticos. RED Revista de Educación a Distancia, V(13). Retrieved October 26, 2007, from http://www.um.es/ead/red/13/ Buchanan, T. (2000). The efficacy of a world-wide Web mediated formative assessment. Journal of Computer Assisted Learning, 16, 193-200. Bull, J. (1999). Computer-assisted assessment: Impact on higher-education institutions. Educational Technology & Society, 2(3), 123-126. Bull, J., & Mckenna, C. (2001). Blueprint for CAA. Loughborough: University of Loughborough. Burd, B. A., & Buchanan, L. E. (2004). Teaching the teachers: Teaching and learning online. Reference Services Review, 32(4), 404-412. Burkhalter, B.B. (1996). How can institutions of higher education achieve quality within the new economy? Total Quality Management, 7, 153-160. Burr, J.T. (1993, March). A new name for a not so new concept. Quality Progress, 87-88. Burr, L., & Spennemann, D. H. R. (2004). Patterns of user behaviour in university online forums. International Journal of Instructional Technology and Distance Learning, 1(10), 11-28. Buzzetto-More, N. A., & Pinhey, K. (2006). Guidelines and standards for the development of fully online learning objects. Interdisciplinary Journal of Knowledge and Learning Objects, 2, 95-104. Cabena, P., Hadjnian, P., Stadler, R., Verhees, J., & Zanasi, A. (1997). Discovering data mining: From concept to implementation (IBM Books). Pearson Education. Cabero, J. (2004). Bases pedagógicas del e-Learning. Revista de Universidad y Sociedad del conocimiento, 3. Retreived October 18, 2007, from http://www.uoc. edu/rusc
Compilation of References
Cabero, J. (2006). Comunidades virtuales para el aprendizaje. Su utilización en la enseñanza. Edutec, 20. Retrieved October 18, 2007, from http://www.uib.es/depart/gte/gte/edutec-e/revelec20/cabero20.htm
Calder, A. (2004). Online learning support: An action research project. James Cook University. Paper presented at the 4th Pacific Rim First Year Experience Conference, Queensland University of Technology, Brisbane, Australia.

Camacho, D., & R-Moreno, M. D. (2007). Towards an automatic monitoring for higher education learning design. International Journal of Metadata, Semantics, and Ontologies, 2(1), 1-10.

Canós, L., & Mauri, J. J. (2005). Metodologías Activas para la Docencia y Aplicación de las Nuevas Tecnologías: Una Experiencia. In URSI 2005. Retrieved October 26, 2007, from http://w3.iec.csic.es/ursi/articulos_gandia_2005/articulos/otros_articulos/462.pdf

Caragea, D., Pathak, J., & Honavar, V. (2004). Learning classifiers from semantically heterogeneous data. Lecture Notes in Computer Science, 3291, 963-980.

Carlsen, W., & Single, P. B. (2000). Factors related to success in electronic mentoring of female college engineering students by mentors working in industry. Paper presented at the Annual Meeting of the National Association for Research in Science Teaching, New Orleans, LA.

Carro, R. M., Ortigosa, A., & Schlichter, J. (2003). A rule-based formalism for describing collaborative adaptive courses, KES2003. Lecture Notes in Artificial Intelligence, 2774, 147-178.

Carro, R. M., Pulido, E., & Rodríguez, P. (1999). Designing adaptive Web-based courses with TANGOW. In G. Cumming, T. Okamoto & L. Gómez (Eds.), Advanced research in computers and communications in education (pp. 147-178). Amsterdam: IOS Press.

Carro, R. M., Pulido, E., & Rodríguez, P. (1999). Dynamic generation of adaptive Internet-based courses. Journal of Network and Computer Applications, 22, 249-257.

Carruthers, J. (1993). The principles and practices of mentoring. In B. J. Caldwell & E. M. A. Carter (Eds.), The return of the mentor: Strategies for workplace learning. London: Falmer Press.

Cashion, J., & Palmieri, P. (2000). Quality in online learning: Learners' views. Retrieved October 19, 2007, from http://flexiblelearning.net.au/nw2000/talkback/p14-3.htm
Castro, E. (2007). Moodle: Manual del profesor. Retrieved October 28, 2007, from http://moodle.org/file.php/11/manual_del_profesor/Manual-profesor.pdf

Cavanaugh, C. (2002). Distance education quality: The resources-practices-results cycle and the standards. Retrieved October 19, 2007, from http://www.unf.edu/~caavanau/2569.htm

Cave, M., Hanney, S., Kogan, M., & Trevett, G. (1988). The use of performance indicators in higher education: A critical analysis of developing practice. London: Jessica Kingsley.

Cecez-Kecmanovic, D., & Webb, C. (2000). Towards a communicative model of collaborative Web-mediated learning. Australian Journal of Educational Technology, 16(1), 73-85.

Chadwick, P. A. (1994). University's TQM initiative. In P. Nightingale & M. O'Neill (Eds.), Achieving quality learning in higher education (pp. 120-135). London: Kogan Page.

Challis, D. (2005). Committing to quality learning through adaptive online assessment. Assessment & Evaluation in Higher Education, 30(5), 519-527.

Challis, D. (2005, Fall). Towards the mature e-portfolio: Some implications for higher education. Canadian Journal of Learning and Technology, 31(3).

Chang, C. C. (2002, March). Building a Web-based learning portfolio for authentic assessment. Paper presented at the International Conference on Computers in Education (ICCE'02), Melbourne, Australia.

Chang, L. J., Yang, J. C., Yu, F. Y., & Chan, T. W. (2003). Development and evaluation of multiple competitive activities in a synchronous quiz game system. Innovations in Education and Teaching International, 40(1), 16-26.
Chang, S.-B., Wang, H.-Y., Liang, J.-K., Liu, T.-C., & Chan, T. W. (2004). A contest event in the connected classroom using wireless handheld devices. In J. Roschelle, T.-W. Chan, Kinshuk, & S. J. H. Yang (Eds.), Proceedings of the 2nd IEEE International Workshop on Wireless and Mobile Technologies in Education (WMTE 2004) (pp. 207-208). Los Alamitos, CA: IEEE Computer Society.

Charman, D. (2005). Issues and impacts of using computer-based assessments (CBAs) for formative assessment. In S. Brown, J. Bull & P. Race (Eds.), Computer-assisted assessment in higher education (pp. 85-93). Eastbourne: Routledge.

Checkland, P. (1981). Systems thinking, systems practice. Chichester: John Wiley.

Checkland, P. (2000). Soft systems methodology: A 30-year retrospective. Systems Research and Behavioral Science, 17, S11-S58.

Checkland, P., & Holwell, S. (1998). Information, systems and information systems. John Wiley and Sons.

Checkland, P., & Scholes, J. (1990). Soft systems methodology in action. John Wiley & Sons.

Cheng, Y. C. (1996). The pursuit of school effectiveness: Theory, policy, and research. Hong Kong: Hong Kong Institute of Educational Research, The Chinese University of Hong Kong.

Cheng, Y. C. (2002). Linkage between innovative management and student-centred approach: Platform theory for effective learning. Paper presented at the Second International Forum on Education Reform: Key Factors in Effective Implementation, Bangkok, Thailand.

Chickering, A. W., & Ehrmann, S. C. (1996, October). Implementing the seven principles: Technology as lever. American Association for Higher Education Bulletin, 49(2), 3-6.

Chiero, T. C. (1997). Teachers' perspectives on factors that affect computer use. Journal of Research on Computing in Education, 30(2), 133-145.

Christensen, E. W., Anakwe, U. P., & Kessler, E. H. (2001). Receptivity to distance learning: The effect of technology, reputation, constraints, and learning preferences. Journal of Research on Computing in Education, 33(3), 263-370.
Chu, K., Chang, M., & Hsia, Y. (2004). Stimulating students to learn with accuracy counter based on competitive learning. In Proceedings of the IEEE International Conference on Advanced Learning Technologies (ICALT'04) (pp. 786-788). IEEE Computer Society.

Cichocki, A., & Amari, S. (2001). Adaptive blind signal and image processing: Learning algorithms and applications. New York: John Wiley & Sons.

Clariana, R. B. (1997). Considering learning style in computer-assisted learning. British Journal of Educational Technology, 28(1), 66-68.

Clark, R. E. (2001). New directions: Evaluating distance education technologies. In R. E. Clark (Ed.), Learning from media: Arguments, analysis, and evidence (pp. 125-136). Greenwich, CT: Information Age Publishing.

Coffield, F., Moseley, D., Hall, E., & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. London: Learning and Skills Research Centre.

Cohen, M. D., & March, J. G. (1974). Leadership and ambiguity: The American college president. New York: McGraw-Hill.

Coll, C., Martín, E., Mauri, T., Miras, M., Onrubia, J., Solé, I., et al. (2005). El constructivismo en el aula, Vol. 111 (15th ed.). Barcelona: Graó.

Collis, B., De-Boer, W., & Slotman, K. (2001). Feedback for Web-based assignments. Journal of Computer Assisted Learning, 17, 306-313.

Comeaux, P. (Ed.). (2002). Communication and collaboration in the online classroom: Examples and applications. Bolton, MA: Anker.
Comezaña, O., & García, F. J. (2005). Plataformas para educación basada en Web: Herramientas, procesos de evaluación y seguridad (Tech. Rep. DPTOIA-IT-2005-001). Salamanca, España: Universidad de Salamanca, Departamento de Informática y Automática.

Computer Adaptive Assessment Project. (2005). What is CAA? Retrieved October 29, 2007, from http://www.castlerockresearch.com/caa/WhatisCAA.aspx

Computing: Work-Life Balance. (2007). The Economist, 23/12/06-5/1/07, 99-100.

Connolly, M., Jones, N., & O'Shea, J. (2005). Quality assurance and e-learning: Reflections from the front line. Quality in Higher Education, 11(1), 59-67.

Conway, C. (1998). Strategies for mentoring: A blueprint for successful organizational development. New York: John Wiley and Sons.

Cookson, P. (2002). The hybridization of higher education. International Review of Research in Open and Distance Learning, 2(2), 1-4.

Cooper, T. (1996). Portfolio assessment in higher education. In Proceedings Western Australia Institute for Educational Research Forum 1996. Retrieved October 28, 2007, from http://www.waier.org.au/forums/1996/cooper.html

Cooper, T. (1997). Portfolio assessment: A guide for students. Perth, WA: Praxis Education.

Cooper, T. (1999). Portfolio assessment: A guide for lecturers, teachers and course designers. Perth, WA: Praxis Education.

Cooper, T., & Emden, C. (2000). Portfolio assessment: A guide for nurses and midwives. Perth, WA: Praxis Education.

Cooper, T., & Love, T. (2000). Portfolios in university-based design education. In C. Swann & E. Young (Eds.), Re-inventing design education in the university (pp. 159-166). Perth, WA: School of Design, Curtin University.

Cooper, T., & Love, T. (2001). Online portfolio assessment in information systems. In S. Stoney & J. Burn (Eds.), Working for excellence in the e-conomy (pp. 417-426). Perth, WA: We-B Research Centre, Edith Cowan University.

Cooper, T., & Love, T. (2001). Online portfolios: Issues of assessment and pedagogy. In International Education Research Conference, Melbourne. Retrieved October 29, 2007, from http://www.aare.edu.au/01pap/coo01346.htm

Cooper, T., & Love, T. (2002). Online portfolios: Issues of assessment and pedagogy. In P. Jeffrey (Ed.), AARE 2001: Crossing borders: New frontiers of educational research. Coldstream, Victoria: AARE Inc.

Cooper, T., Hutchins, T., & Sims, M. (1999). Developing a portfolio which demonstrates competencies. In M. Sims and T. Hutchins (Eds.), Learning materials: Certificate in children's services; 0-6 years (bilingual support) (pp. 3-29). Perth, WA: Ethnic Childcare Resource Inc. Western Australia.

Costa, D., Hertz, A., & Dubuis, O. (1995). Embedding of a sequential algorithm within an evolutionary algorithm for coloring problems in graphs. Journal of Heuristics, 1, 105-128.

Council for Higher Education Accreditation. (2002). International quality review. Retrieved October 19, 2007, from http://www.chea.org/international/inter_summary02.html

Covington, M. V. (1992). Making the grade: A self-worth perspective on motivation and school reform. Cambridge, UK: Cambridge University Press.

Crompton, P. (1999). Evaluation: A practical guide to methods. Retrieved October 29, 2007, from http://www.icbl.hw.ac.uk/ltdi/implementing-it/eval.pdf

Cronbach, L., & Snow, R. (1977). Aptitudes and instructional methods: A handbook for research on interactions. New York: Irvington.

Crosby, P. B. (1979). Quality is free. New York: McGraw-Hill.

Curtis, J. B. (2002). Collaborative tools for e-learning. Chief Learning Officer: Solutions for Enterprise Productivity. Retrieved October 29, 2007, from http://www.clomedia.com/content/templates/clo_feature.asp?articleid=41&zoneid=30

Dahlgaard, J. J., Kristensen, K., & Kanji, G. K. (1995). TQM and education. Total Quality Management, 6(5-6).

Dahlgaard, J., Kristensen, K., & Kanji, G. K. (2005). Fundamentals of total quality management. London: Routledge.
Daniels, S. E. (2002). First to the top. Quality Progress, 35(5), 41-53.

Davies, M. (2001). Adaptive AHP: A review of marketing applications with extensions. European Journal of Marketing, 35(7), 872-893.

Davis, F. D. (1989). Perceived usefulness, perceived ease of use and user acceptance of information technology. MIS Quarterly, 13(3), 319-340.

Davis, S. L., & Morrow, A. K. (2004). Creating usable assessment tools: A step-by-step guide to instrument design. Retrieved October 29, 2007, from http://www.jmu.edu/assessment/wm_library/ID_Davis_Morrow_AAHE2004.pdf

Dearing, R. (1997). Higher education in the learning society. London: HMSO.

Deming, W. E. (1989). Foundation for management of quality in the western world. New York: Perigee Books.

Deshpande, M., & Karypis, G. (2004). Selective Markov models for predicting Web page accesses. ACM Transactions on Internet Technology (TOIT), 4(2), 163-184.

De Smet, M., Van Keer, H., & Valcke, M. (2008). Blending asynchronous discussion groups and peer tutoring in higher education: An exploratory study of online peer tutoring behaviour. Computers and Education, 50(1), 207-223.

DETC. (2007). Distance Education and Training Council. Retrieved October 30, 2007, from http://www.detc.org

Dewart, H., Drees, D., Hixenbaugh, P., & Williams, D. (2004, April 5-7). Electronic peer mentoring: A scheme to enhance support and guidance and the student learning experience. Paper presented at the Psychology Learning and Teaching Conference, University of Strathclyde, Glasgow, UK.

Dewey, J. (1933). How we think. Boston, MA: Heath.

Dewey, J. (1938). Experience and education. New York: Macmillan.

DfES. (2003). Widening participation in higher education. London: Department for Education and Skills.

Di Martino, P., & Zan, R. (2002). An attempt to describe a negative attitude toward mathematics. In P. Di Martino (Ed.), Proceedings of the Mathematics Views—XI European Workshop: Research on Mathematical Beliefs (pp. 22-29). Pisa: Università di Pisa Press.

Diaz, D., & Cartnal, R. (2006). Term length as an indicator of attrition in online learning. Innovate, 2(5). Retrieved October 18, 2007, from http://www.innovateonline.info/index.php?view=article&id=196

DIN. (2004). PAS 1032-1. Deutsches Institut für Normung e. V. Retrieved October 30, 2007, from http://www.din.de

Directorate-General for Education and Culture, European Commission. (2004). Final report. Study of the e-learning suppliers' market in Europe. Danish Technological Institute, Massy, J., Alphametrics Ltd, & Heriot-Watt University.

Distance Education and Training Council. (2002). DETC accreditation overview. Retrieved October 19, 2007, from http://www.detc.org/content/freePublications.html
Dorigo, M., & Stützle, T. (2004). Ant colony optimization. MIT Press.

Dougiamas, M., & Taylor, P. C. (2003). Moodle: Using learning communities to create an open source course management system. Paper presented at the EDMEDIA 2003 Conference, Honolulu, Hawaii.
Downes, S. (2005, October 17). E-learning 2.0. eLearn Magazine. Retrieved October 18, 2007, from http://elearnmag.org/subpage.cfm?section=articles&article=29-1

Downes, S. (2005). What e-learning 2.0 means to you. Paper presented at the meeting of the Transitions in Advanced Learning Conference, Ottawa.

Driscoll, M. (2002). Blended learning: Let's get beyond the hype. Learning and Training Innovations Newsline. Retrieved October 18, 2007, from http://www.ltimagazine.com/ltimagazine/article/articleDetail.jsp?id=11755

Driscoll, M. P. (2002). How people learn (and what technology might have to do with it). ERIC Digest, Syracuse University. Retrieved October 16, 2007, from http://www.ericdigests.org/2003-3/learn.htm

Dron, J. (2002). Achieving self-organisation in network-based learning environments. Doctoral dissertation.

Dron, J., Mitchell, R., Siviter, P., & Boyne, C. (1999). CoFIND: Experiment in n-dimensional collaborative filtering. In World Conference on the WWW and Internet (pp. 301-306).

Duda, R., Hart, P. E., & Stork, D. G. (2000). Pattern classification (2nd ed.). Wiley-Interscience.

Dusick, D. (1998). What social cognitive factors influence faculty members' use of computers for teaching? A literature review. Journal of Research on Computing in Education, 31(2), 123-137.

Duval, R. (1995). Sémiosis et pensée humaine. Peter Lang.

EADL. (2003). Quality guide. Quality guidelines to improve the quality of distance learning institutes in Europe (2nd ed.). European Association Distance Learning.

Eby, L. T. (1997). Alternative forms of mentoring in changing organizational environments: A conceptual extension of the mentoring literature. Journal of Vocational Behaviour, 51, 125-144.
Educational Testing Service. (2006). ICT literacy assessment preliminary findings. Retrieved October 18, 2007, from http://www.ets.org/Media/Products/ICT_Literacy/pdf/2006_Preliminary_Findings.pdf

eduQua. (2005). Schweizerisches Qualitätszertifikat für Weiterbildungsinstitutionen. Retrieved October 30, 2007, from http://www.eduqua.ch

EFMD. (2007). European quality improvement system. European Foundation for Management Development. Retrieved October 30, 2007, from http://www.efmd.be/equis

EFQUEL. (2006). European foundation for quality in e-learning. Retrieved October 30, 2007, from http://www.qualityfoundation.org

Ehlers, U.-D. (2004). Quality in e-learning. The learner as a key quality assurance category. European Journal Vocational Training, 29, 3-15.

Ehlers, U.-D. (2007). Towards greater quality literacy in a e-learning Europe. e-Learning Papers, 2. Retrieved October 30, 2007, from http://www.e-Learningpapers.eu/index.php?page=doc&doc_id=8549&doclng=6

Ehlers, U.-D., Goertz, L., Hildebrandt, B., & Pawlowski, J. (2005). Quality in e-learning. Use and dissemination of quality approaches in European e-learning. A study by the European Quality Observatory (CEDEFOP Panorama Series, 116). Luxembourg: Office for Official Publications of the European Communities.

Ehlers, U.-D., Hildebrand, B., Tescheler, S., & Pawlowski, J. (2004). Designing tools and frameworks for tomorrow's quality development. In Quality in European e-learning: Designing tools and frameworks for tomorrow's quality development. Workshop of the European Quality Observatory (EQO), co-located with the 4th IEEE International Conference on Advanced Learning Technologies. Joensuu: European Quality Observatory.

Ehrmann, S. C. (1995). Asking the right question: What does research tell us about technology and higher learning? Change, 27(2), 20-27.
EIfEL. (2007). About SEEL: Supporting excellence in e-learning. European Institute for e-Learning. Retrieved October 30, 2007, from http://www.eife-l.org/activities/projects/seel

El Louadi, M., Galletta, D. F., & Sampler, J. L. (1998). An empirical validation of a contingency model for information requirements determination. ACM SIGMIS Database, 29(3), 31-51.

ELEX. (2005). EXEMPLO e-learning project. Retrieved October 30, 2007, from http://217.222.182.72/html/iess/pt.htm

Elton, L. R. B., & Laurillard, D. M. (1979). Trends in research on student learning. Studies in Higher Education, 4(1), 87-102.

Ensher, E. A., Heun, C., & Blanchard, A. (2003). Online mentoring and computer-mediated communication: New directions in research. Journal of Vocational Behaviour, 63, 264-288.

Entwistle, N. (2003). University teaching-learning environments and their influences on student learning: An introduction to the ETL project. In Proceedings of the 10th Conference of the European Association for Research on Learning and Instruction (EARLI). Padova, Italy: EARLI.

Entwistle, N., McCune, V., & Hounsell, J. (2002). Approaches to studying and perceptions of university teaching-learning environments: Concepts, measures and preliminary findings. Edinburgh: University of Edinburgh.

Entwistle, N., Tait, H., & McCune, V. (2000). Patterns of response to an approaches to studying inventory across contrasting groups and contexts. European Journal of Psychology of Education.
eQCheck. (2006). Qualite-learning assurance services ltd. Retrieved October 30, 2007, from http://www.eqcheck.co.uk

EQO. (2007). European quality observatory. Retrieved October 30, 2007, from http://www.eqo.info

EQUEL. (2004). Virtual European centre in e-learning. Retrieved October 30, 2007, from http://equel.net

EQUIPE. (2007). Aims & objectives. European Quality in Individualised Pathways in Education. Retrieved October 30, 2007, from http://www.EQUIPE.up.pt/equipe/objectives.htm

European Commission. (2003). E-learning programme. Education, Programmes, Europe. Retrieved October 30, 2007, from http://ec.europa.eu/education/programmes/eLearning/programme_en.html#

European Commission. (2005). Common European principles for teacher competences and qualifications. Retrieved October 18, 2007, from http://europa.eu.int/comm/education/policies/2010/doc/principles_en.pdf

European Commission. (2005). E-learning, designing education tomorrow. Report on the consultation workshop. The "e" for our universities—Virtual campus. Organisational changes and economics models. DRAFT. Brussels: European Commission, Directorate-General for Education and Culture.

European Commission. (2006, September 29). Benchmarking access and use of ICT in European schools 2006. Retrieved October 18, 2007, from http://ec.europa.eu/information_society/eeurope/i2010/docs/studies/final_report_3.pdf

European Schoolnet. (2005, July 15). Insight special report on assessment schemes for teachers' ICT competence—A policy analysis. Retrieved October 18, 2007, from http://www.e-Learningeuropa.info/index.php?page=doc&doc_id=6578&doclng=6

Evans, J. R., & Lindsay, W. M. (1999). The management and control of quality (4th ed.). Cincinnati, OH: South-Western College Publishing.

Evans, M. (2006). Goodbye Web 2.0, long live Web 3.0. Retrieved October 25, 2007, from http://evans.blogware.com/blog/_archives/2006/11/12/2493546.html
Falchikov, N. (2001). Learning together: Peer tutoring in higher education. Falmer Press.
Fandos, M., & González, A. P. (2005). Estrategias de Aprendizaje ante las Nuevas Posibilidades Educativas de las TIC. In A. Méndez-Vilas, B. Gonzalez, J. Mesa, & J. A. Mesa (Eds.), Proceedings of the Third International Conference on Multimedia and Information & Communication Technologies in Education (pp. 7-10). Cáceres, Spain: Formatex.

Farrell, R., Liburd, S. D., & Thomas, J. C. (2004). Dynamic assembly of learning objects. In Proceedings of the 13th International World Wide Web Conference, New York.

Fayyad, U., Piatetsky-Shapiro, G., Smyth, P., & Uthurusamy, R. (1996). Advances in knowledge discovery and data mining. New York: The MIT Press.

Feigenbaum, A. V. (1951). Quality control: Principles, practice, and administration. New York: McGraw-Hill.

Felder, R., & Silverman, L. (1988). Learning and teaching styles in engineering education. Engineering Education, 78(7), 674-681.

Ferrari, P. L. (2004). Mathematical language and advanced mathematics learning. In M. Johnsen Høines & A. Berit Fuglestad (Eds.), Proceedings of the 28th Conference of the International Group for the Psychology of Mathematics Education (Vol. 2, pp. 383-390). Bergen, Norway: Bergen University College Press.

Feuerstein, R. (1990). The theory of structural cognitive modifiability. In B. Z. Presseisen (Ed.), Learning and thinking styles: Classroom applications (pp. 68-134). Washington, DC: National Education Association.

Feuerstein, R., Rand, Y., Hoffman, M. B., & Miller, R. (1980). Instrumental enrichment. Baltimore, MD: University Park Press.

Figueiredo, A. D. (2002). Redes e Educação: A Surpreendente Riqueza de um Conceito. In Conselho Nacional de Educação (Ed.), Redes de Aprendizagem, Redes de Conhecimento. Lisboa: Conselho Nacional de Educação.

Fikes, R., & Nilsson, N. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2, 189-208.
FitzGerald, S. (2006, June 8). Social networking: Philosophy and pedagogy. 2006 Networks Community Forum. Edna, Australia. Retrieved October 25, 2007, from http://www.groups.edna.edu.au/mod/forum/discuss.php?d=6615

Flynn, A., Concannon, F., & Bheachain, C. N. (2005). Undergraduate students' perceptions of technology-supported learning: The case of an accounting class. International Journal on E-Learning, 4(4), 427-444.

Ford, P., Goodyear, P., Heseltine, R., Lewis, R., Darby, J., Graves, J., et al. (1996). Managing change in higher education: A learning environment architecture. Society for Research in Higher Education and Open University Press.

Frees, S., & Kessler, G. D. (2004). Developing collaborative tools to promote communication and active learning in academia. In Proceedings of the 34th Annual Conference Frontiers in Education (FIE'04) (Vol. 3, pp. S3B/20-S3B/25). Piscataway, NJ: IEEE.

Freund, R. J., & Piotrowski, M. (2003). Mass customization and personalization in adult education and training. Paper presented at the 2nd Interdisciplinary World Congress on Mass Customization and Personalization MCPC2003, Munich, Germany.

Fridman, N., & McGuinness, D. (2001). Ontology development: A guide to creating your first ontology (Rep. No. KSL-01-05, SMI-2001).

FuturEd. (2002). Consumers guide to e-learning. Retrieved October 19, 2007, from http://www.futured.com/pdf/ConGuide%20Eng%20CD.pdf

Gambardella, L. M., Taillard, E., & Agazzi, G. (1999). MACS-VRPTW: A multiple ant colony system for vehicle routing problems with time windows. In New ideas in optimization (pp. 63-76).

Gándara, M. (1995). La interfaz con el usuario: Una introducción para educadores. In Alvarez-Manilla and Bañuelos (Eds.), Usos educativos de la computadora. México: CISE/UNAM.
García Aretio, L. (2003). Comunidades de aprendizaje en entornos virtuales. La comunidad iberoamericana de la CUED. In M. Barajas (Ed.), La tecnología educativa en la enseñanza superior. Madrid, Spain: McGraw-Hill.

García Carrasco, J., Pérez, M. A., Rodríguez, B., & Sánchez, M. C. (2002). Evaluar en la red. Teoría de la Educación: Educación y Cultura en la Sociedad de la Información, 3(5). Retrieved June 11, 2007, from http://www.usal.es/~teoriaeducacion/

Garcia, P., Amandi, A., Schiaffino, S., & Campo, M. (2007). Evaluating Bayesian networks' precision for detecting students' learning styles. Computers & Education, 49(3), 794-808.

Garrison, D. R. (2003). Cognitive presence for effective asynchronous online learning: The role of reflective inquiry, self-direction and metacognition. In J. Bourne & J. C. Moore (Eds.), Elements of quality online education: Practice and direction (pp. 47-58). Needham, MA: Sloan-C.

Garrison, D. R., & Anderson, T. (2003). E-learning in the 21st century: A framework for research and practice. London, New York: RoutledgeFalmer.

Garrison, D. R., & Kanuka, H. (2004). Blended learning: Uncovering its transformative potential in higher education. Internet and Higher Education, 7(2), 95-105.

Gasar, S., Bohanec, M., & Rajkovic, V. (2002). Combined data mining and decision support approach to the prediction of academic achievement. In Workshop on Integrating Aspects of Data Mining (pp. 41-52).

Gerbic, P. (2002). Learning in asynchronous environments for on campus students. In Kinshuk, R. Lewis, K. Akahori, R. Kemp, T. Okamoto, L. Henderson & C.-H. Lee (Eds.), Proceedings of the 9th International Conference on Computers in Education (Vol. 2, pp. 1492-1493). Auckland, New Zealand: Asia-Pacific Society for Computers in Education.

Gibbs, G. (2006). Why assessment is changing. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education (pp. 11-22). New York: Routledge.
Gibbs, G., & Simpson, C. (2003). Does your assessment support your students' learning? Journal of Learning and Teaching in Higher Education, 1(1).

Gibbs, G., & Simpson, C. (2004). Measuring the response of students to assessment: The assessment experience questionnaire. In C. Rust (Ed.), Improving student learning: Theory, research and scholarship. Oxford: Oxford Centre for Staff and Learning Development.

Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1, 3-31.

Gibbs, G., Simpson, C., & Macdonald, R. (2003). Improving student learning through changing assessment—A conceptual and practical framework. Paper presented at the European Association for Research into Learning and Instruction Conference, Padova, Italy.

Gifford, B. R., & Enyedy, N. (1999). Activity centered design: Towards a theoretical framework for CSCL. In Proceedings of the Third International Conference on Computer Support for Collaborative Learning. Retrieved October 18, 2007, from http://www.gseis.ucla.edu/faculty/enyedy/pubs/Gifford&Enyedy_CSCL2000.pdf

Ginns, P., & Ellis, R. (2007). Quality in blended learning: Exploring the relationships between online and face-to-face teaching and learning. Internet and Higher Education, 10(1), 53-64.

Gipps, C. (1994). Beyond testing: Towards a theory of educational assessment. London: Falmer Press.

Gisbert, M. (2004). Las TIC como motor de innovación de la Universidad. In A. Sangrà & M. González (Eds.), La transformación de las universidades a través de las TIC: Discursos y prácticas (pp. 193-197). Barcelona: Ed. UOC.

Global University Alliance. (2000). About GUA. Retrieved October 19, 2007, from http://www.gua.com/shell/gua/index.asp

Godoy, L. A. (2005). Learning-by-doing in a Web-based simulated environment. In Proceedings of the 6th International Conference on Information Technology Based Higher Education and Training (ITHET 2005) (pp. F4C/7-F4C/10). Piscataway, NJ: IEEE.

Gomez-Perez, A., & Manzano-Macho, D. (2004). An overview of methods and tools for ontology learning from texts. Knowledge Engineering Review, 19(3), 187-212.

Goode, V. L. (2003). Lifestyle in the balance. Chartered Accountants Journal, 82(3), 22-24.

Goodyear, P. (2001). Effective networked learning in higher education: Notes and guidelines (Deliverable 9) (Vol. 3). Lancaster: CSALT, Lancaster University.

Goodyear, P. (2002). Online learning and teaching in the arts and humanities: Reflecting on purposes and design. In E. A. Chambers & K. Lack (Eds.), Online conferencing in the arts and humanities (pp. 1-15). Milton Keynes: Institute of Educational Technology, Open University.

Goodyear, P. (2005). Educational design and networked learning: Patterns, pattern languages and design practice. Australasian Journal of Educational Technology, 21(1), 82-101.

Gould, S. J. (1978). Ever since Darwin: Reflections in natural history. Burnett.

Grant, L. (2006). Using wikis in schools: A case study. Retrieved October 25, 2007, from http://www.futurelab.org.uk/research/discuss/05discuss01.htm

Gray, M. M., & Gray, W. A. (1990). Planned mentoring: Aiding key transitions in career development. Career Planning and Adult Development Journal, 6(3), 27-32.

Green, D. (Ed.). (1994). What is quality in higher education? (pp. 3-20). Buckingham, UK: Open University Press.

Gruber, T. R. (1995). Towards principles for the design of ontologies used for knowledge sharing. International Journal of Human-Computer Studies, 43, 907-928.

Guldberg, K., & Pilkington, R. (2007). Tutor roles in facilitating reflection on practice through online discussion. Educational Technology and Society, 10(1), 61-72.
Guthrie, W. K. C. (1971). The sophists. London: Cambridge University Press.

Gutiérrez, S., Pardo, A., & Delgado Kloos, C. (2006). A modular architecture for intelligent Web resource based tutoring systems. Intelligent Tutoring Systems, 753-755.

Gutiérrez, S., Pardo, A., & Delgado Kloos, C. (2006). Some ideas for the collaborative search of the optimal learning path. In Adaptive Hypermedia 2006 (pp. 430-434).

Gutiérrez, S., Valigiani, G., Jamont, Y., Collet, P., & Delgado Kloos, C. (2007). A swarm approach for automatic auditing of pedagogical planning. In Proceedings of IEEE ICALT 2007 (pp. 136-138).

Hainaut, J., Tonneau, C., Joris, M., & Chandelon, M. (1993). Transformation-based database reverse engineering. In R. Elmasri, V. Kouramajian & B. Thalheim (Eds.), Conference on Entity Relationship Approach (pp. 364-375). Springer.

Haladyna, T. (1997). Writing test items to evaluate higher order thinking. Needham Heights, MA: Allyn & Bacon.

Halkidi, M., Batistakis, Y., & Vazirgiannis, M. (2001). On clustering validation techniques. Journal of Intelligent Information Systems, 17(2-3), 107-145.

Halperin, R. (2005). Learning technology in higher education: A structurational perspective on technology-mediated learning practices (Doctoral dissertation). London: London School of Economics.

Ham, C. L. (2003). Service quality, customer satisfaction, and customer behavioral intentions in higher education. Published doctoral dissertation AAT 3090234. Nova Southeastern University, FL.

Hamilton, B. A., & Scandura, T. A. (2003). E-mentoring: Implications for organizational learning and development in a wired world. Organizational Dynamics, 31(4), 388-402.
Compilation of References
Hara, N., & Kling, R. (2000). Student distress in a Web-based distance education course. Information, Communication and Society, 3(4), 556-579. Hardle, W., & Simar, L. (2006). Applied multivariante statical analysis. New York: Springer. Harrington, A. (1999). E-mentoring: The advantages and disadvantages of using e-mail to support distant mentoring. The Coaching and Mentoring Network Articles. Retrieved October 17, 2007, from http://www. coachingnetwork.org.uk/ResourceCentre/Articles/ ViewArticlePF.asp?artId=63 Harris, J., O’Bryan, E., & Rotenberg, L. (1996). It’s a simple idea, but it’s not easy to do! Practical lessons in telementoring. Learning and Leading with Technology. Retrieved October 17, 2007, from http://emissary.wm.edu/ templates/content/publications/October96LLT.pdf Hartley, J., & Sleeman, D. (1973). Towards more intelligent teaching systems. International Journal of ManMachine Studies, 2, 215-336. Harvey, L., & Knight, P.T. (1996). Transforming higher education. Buckingham, UK: Open University Press. Headlam-Wells, J., Gosland, J., & Craig, J. (2005). There’s magic in the Web: E-mentoring for women’s career development. Career Development International, 10(6-7), 444-459. Headlam-Wells. (2004). E-mentoring for aspiring women managers. Women in Management Review, 19(4), 212218. Heerema, D. L., & Rogers, R. L. (2001). Avoiding the quality/quantity trade-off. T.H.E. Journal, 29(5), 14-21. Helms, M.M., Williams, A.B., & Nixon, J.C. (2001). TQM principles and their relevance to higher education: The question of tenure and post-tenure review. The International Journal of Education Management, 14(6-7), 322-331. Henly, D. C. (2003). Use of Web-based formative assessment to support student learning in a metabolism/nutrition unit. Journal of Dental Education, 7(3), 116-122.
Hernández Nanclares, N. (2004). La evaluación mediante portafolio en Relaciones Económicas Internacionales. In R. Rodríguez, J. Hernández & S. Fernández (Eds.), Docencia universitaria: Orientaciones para la formación del profesorado (pp. 331-341). Documentos ICE, Instituto de Ciencias de la Educación, Universidad de Oviedo. Hernández Nanclares, N. (2006) El portafolios electrónico: Una alternativa para evaluar en la Universidad. Paper presented in I jornadas de innovación educativa de la Escuela Politécnica Superior de Zamora, junio Zamora, España. Hiltz, S. R. (1994). The virtual classroom: Learning without limits via computer networks. Worwood, NJ: Ablex. Hisham, N., Campton, P., & FitzGerald, F. (2004). A tale of two cities: A study on the satisfaction of asynchronous e-learning systems in two Australian universities. In R. Atkinson, C. McBeath, D. Jonas-Dwyer, & R. Phillips (Eds.), Beyond the comfort zone: Proceedings of the 21st ASCILITE Conference (pp. 395-402). Perth, Australia: ASCILITE. Retrieved October 19, 2007, from http://www. ascilite.org.au/conferences/perth04/procs/hisham.html Hislop, G. W. (1999). Anytime, anyplace learning in an online graduate professional degree program. Group Decision and Negotiation, 8, 385-390. Hofmann, T. (2003). Collaborative filtering via Gaussian probabilistic latent semantic analysis. In 26th ACM SIGIR Conference on Research in Information Retrieval (pp. 259-266). Hollands, N. (2000). Online testing: Best practices from the field. Retrieved October 30, 2007, from http://198.85.71.76/english/blackboard/testingadvice. html Holmes, G., & McElwee, G. (1995). Total quality management in higher education how to approach human resource management. Total Quality Management, 7(6), 5. Hope, A. (2001) Quality assurance. In G. Farrell (Ed.), The changing faces of virtual education (pp. 125-140). London: The Commonwealth of Learning.
Compilation of References
Hudson, B. (2005). Conditions for achieving communication, interaction and collaboration in e-learning environments. E-Learningeuropa.info. Retrieved October 18, 2007, from http://www.e-Learningeuropa. info/index.php?page=doc&doc_id=6494&doclng=7& menuzone=1 Hunt, L. M., Thomas, M. J. W., & Eagle, L. (2002). Student resistance to ICT in education. In R. Kinshuk, K. Lewis, R. Akahori, T. Kemp, L. Okamoto, C. Henderson & H. Lee (Eds.), Proceedings of the 9th International Conference on Computers in Education (Vol. 2, pp. 964-968), Auckland, New Zealand: Asia-Pacific Society for Computers in Education. HUSAT. (1990). The HUFIT planning, analysis and specification toolset. Loughborough: HUSAT Research Institute, Loughborough University. Husson, A. (2004). Comparing quality models adequacy to the needs of clients in e-learning. In Quality in European E-Learning Designing Tools and Frameworks for Tomorrows Quality Development, Workshop European Quality Observatory (EQO) co-located to the 4th IEEE International Conference on Advanced Learning Technologies. Joensuu: European Quality Observatory. Hutchins, T., Sims, M., & Cooper, T. (1999). Developing a portfolio which demonstrates competencies. In M. Sims & T. Hutchins (Eds.), Learning materials: Certificate in children’s services; 0-6 years (bilingual support) (pp. 3-29). Perth, WA: Ethnic Childcare Resource Inc. Western Australia. Hyde, P., Booth R., & Wilson, P. (2003). The development of quality online assessment in VET. In H. Guthrie (Ed.), Online learning: Research readings (pp. 87-106). Leabrook, South Australia: NCVER. Hyland, B. (2002). Cone of learning. From the course “Train the trainer”. Iowa Center for Public Health Preparedness. Retrieved October 26, 2007, from http:// www.public-health.uiowa.edu/icphp/ed_training/ttt/archive/2002/2002_course_materials/Cone_of_Learning. pdf
Hyvärinen, A., & Oja, E. (1998). A fast fixed-point algorithm for independent component analysis. Neural Computation, 9(7), 1483-1492. Hyvärinen, A., Karhunen, J., & Oja, E. (2001). Independent Component Analysis. New York: John Wiley & Sons. Ibabe, I., & Jauregizar, J. (2005). Ejercicios de autoevaluación con Hot Potatoes. In I. Ibabe & J. Jauregizar (Eds.), Cómo crear una web docente de calidad (pp. 65-100). A Coruña, Spain: Netbiblo. Ibabe, I., Gómez J., & Jauregizar, J. (2006). Aplicación de pruebas de auto-evaluación interactivas para potenciar el trabajo autónomo de los estudiantes conforme al sistema ECTS. In J. Guisasola & T. Nuño (Eds.), La educación universitaria en tiempos de cambio (pp. 63-74). San Sebastián, Spain: Universidad del País Vasco. IITT. (2005). Institute of IT training. Retrieved October 30, 2007, from http://www.iitt.org.uk Ilghami, O., & Nau, D.S. (2003). A general approach to synthesize problem-specific planners (Tech. Rep. CS-TR-4597). University of Maryland: Department of Computer Science. IMS CP. (2006). Retrieved October 22, 2007, from http://www.imsglobal.org/content/packaging/ IMS LD, IMS Learning Design. (2006). IMS Global Learning Consortium. Retrieved October 22, 2007, from http://www.imsglobal.org/learningdesign/index.html IMSSS, IMS Simple Sequencing. (2006). Retrieved October 22, 2007, from http://www.imsglobal.org/simplesequencing/index.html Inglis, A. (1999). Is online delivery less costly than print and is it meaningful to ask? Distance Education, 20(2), 220-232. Institute for Higher Education Policy. (2000). Quality on the line. Retrieved October 19, 2007, from http://www. ihep.com/Pubs/PDF/Quality.pdf/ Intelligent Web Teacher. (2006). Retrieved October 21, 2007, from http://www.momanet.it/english/iwt_eng. html
Compilation of References
ISO. (2003). ISO standards compendium ISO 9000–quality management (10th ed.). Geneva: International Organization for Standardization. ISO. (2005). ISO/IEC standard benchmarks quality of elearning. International Standard Organization. Retrieved October 30, 2007, from http://www.iso.org/iso/en/commcentre/pressreleases/2006/ref992.html ITC. (2003). Online assessment techniques. Retrieved October 29, 2007, from http://web.utk.edu/~dsuppach/ indep/assessment2.htm. Jaeger, W. (1945). Paideia: The ideals of Greek culture (G. Highet, Trans.). New York: Oxford University Press. James, R., McInnis, C., & Devlin, M. (2002). Assessing learning in Australian universities. Canberra, Australia: Center for the Study of Higher Education, The University of Melbourne & The Australian Universities Teaching Committee.
ton Centre for Improving the Quality of Undergraduate Education. Joint Committee on Standards for Educational Evaluation. (1988). The personnel evaluation standards. Thousand Oaks, CA: Corwin Press. Jonassen, D. H., & Grabowski, B. L. (1993). Handbook of individual differences, learning and instruction. Erlbaum, Hillsdale. Jonassen, D.H., & Grabowski, B.L. (1993). Handbook of individual differences, learning, and instruction. Hillsdale, NJ: Lawrence Erlbaum Associates. Jones, E. R., (2002). Implications of SCORM™ and emerging e-learning standards on engineering education. In ASEE Gulf-Southwest Annual Conference (pp. 20-22). Juran, J. M. (1988). Quality control handbook. New York: McGraw Hill.
Jennings, D. (2005). E-learning 2.0, whatever that is. Retrieved October 18, 2007, from http://alchemi.co.uk/ archives/ele/e-Learning_20_wh.html
Juran, J.M., & Gyrna, F.M. Jr. (1988). Juran’s quality control handbook. New York: McGraw-Hill.
Johnes, J., & Taylor, J. (1990). Performance indicators in higher education. Buckingham: Open University Press.
Karrer, T. (2006, February 10). What is e-learning 2.0. E-Learning Technology. Retrieved October 18, 2007, from http://e-Learningtech.blogspot.com/2006/02/whatis-e-Learning-20.html
Johnson, D. W., & Johnson, R. T. (1987). Learning together and alone: Cooperative, competitive, and individualistic. Englewood Cliffs, NJ: Prentice Hall. Johnson, D., & Johnson, R. (1999). Learning together and alone: Cooperative, competitive, and individualistic learning. Boston: Allyn and Bacon. Johnson, D., Johnson, R., & Holubec, E. (1998). Cooperation in the classroom. Boston: Allyn and Bacon. Johnson, R., & Johnson, D. W. (1998). Cooperative learning. Two heads learn better than one. Transforming Education, 18, 34. Johnson-Bogart, K. (1995). Writing portfolios: What teachers learn from students self-assessment. In Washington Centre’s Evaluation Committee (Ed.), Assessment in and of collaborative learning. Washington: Washing-
Kasprisin, C. A., Single, P. B., Single, R. M., & Muller, C. B. (2003). Building a better bridge: Testing e-training to improve e-mentoring programmes in higher education. Mentoring and Tutoring, 11(1). Kassabova, D., & Trounson, R. (2000). Applying soft systems methodology for user centred design. In Proceedings of the NACCQ 2000 (pp. 159-165). Wellington. Kauffman, S. (1996). At home in the universe: The search for the laws of self-organization and complexity. Oxford University Press. Kearsley, G. (2000). Online education: Learning and teaching in cyberspace. Belmont, CA: Wadsworth. Kendle, A., & Northcote, M. (2000). The struggle for balance in the use of quantitative and qualitative online assessment tasks. In Proceedings ASCILITE 2000, Coffs
Compilation of References
Harbour. Retrieved October 29, 2007, from http://www. ascilite.org.au/conferences/coffs00/papers/amanda_kendle.pdf
Klenowski, V. (2002). Developing portfolios for learning and assessment: Processes and principles. London: RoutledgeFalmer.
Kennedy, G.A. (1963). The art of persuasion in Greece. Princeton, NJ: Princeton University Press.
Klenowski, V. (2002). Developing portfolios for learning and assessment: Processes and principles. London: RoutledgeFalmer.
Kennedy, J. K., Sang, J. C. K, Wai-ming, F. Y., & Fok, P. K. (2006, May). Assessment for productive learning: Forms of assessment and their potential for enhancing nd learning. Paper presented at the 32 Annual Conference of the International Association for Educational Assessment, Singapore. Kennedy, R., & Eberhart, R. (2001). Swarm intelligence. CA: Morgan Kaufmann/Academic Press. Kerlin, C.A. (2000). Measuring student satisfaction with the service processes of selected student educational support services at Everett Community College. Published Dissertation AAT9961458. Oregon State University. Kerr, M. S., & Rynearson, R. (2006). Student characteristics for online learning success. Internet and Higher Education, 9, 91-105. Kickul, G., & Kickul, J. (2006). Closing the gap: Impact of student proactivity and learning goal orientation on e-learning outcomes. International Journal on E-Learning, 5(3), 361-372. Kieran, C., Forman, E., & Sfard, A. (2001). Learning discourse: Sociocultural approaches to research in mathematics education. Educational Studies in Mathematics, 46, 1-12. Kim, S., & Sonnenwald, D. H. (2002). Investigating the relationship between learning style preferences and teaching collaboration skills and technology: An exploratory study. In E. Toms (Ed.), Proceedings of the American Society of Information Science & Technology Annual Conference (pp. 64-73). Medford, NJ: Information Today. Kimber, K., Pillay, H., & Richards, C. (2007). Technoliteracy and learning: An analysis of the quality of knowledge in electronic representations of understanding. Computers & Education, 48(1), 59-79.
Klenowski, V., Askew, S., & Carnell, E. (2006). Portfolios for learning, assessment and professional development. Assessment and Evaluation in Higher Education, 31(3), 267-286. Knight, M. E. (1994). Portfolio assessment: Application of portfolio analysis. Lanham, MD: University Press of America. Knight, P.T. (2001). Complexity and curriculum: A process approach to curriculum-making. Teaching in Higher Education, 6(3), 369-381. Knight, P.T., & Trowler, P. (2001). Departmental leadership in higher education. Buckingham: Society for Research in Higher Education & Open University Press. Koch, J.V., & Fisher, J.L. (1998). Higher education and total quality management. Total Quality Management, 9(8), 659-668. Kolisch, R., & Hartmann, S. (1999). Heuristic algorithms for solving the resource-constrained project scheduling problem: Classification and computational analysis. Project scheduling: Recent Models, Algorithms and Applications, 147-178. Koper, R. (2005). Designing learning networks for lifelong learners. In R. Koper & C. Tattersall (Eds.), Learning design: A handbook on modelling and delivering networked education and training (pp. 239-252). Koper, R., & Olivier, B. (2004). Representing the learning design of units of learning. Educational Technology & Society, 7(3), 97-111. Kotsiantis, S.B., Pierrakeas, C.J., & Pintelas, P.E. (2003). Preventing student dropout in distance learning using machine learning techniques. In Proceedings of 7th International Conference on Knowledge-Base Intelligent Information an Engineering Systems.
Compilation of References
Kozma, R. (1991). Learning with media. Review of Educational Research, 61(2), 179-212. Kram, K. E. (1985). Mentoring at work: Developmental relationships in organizational life. New York: University Press of America. Kristofic, A., & Bielikova, M. (2005). Improving adaptation in Web-based educational hypermedia by means of knowledge discovery. In ACM Conference on Hypertext and Hypermedia (pp. 184-192). Kulhavy, R. W., & Stock, W. A. (1989). Feedback in written instruction: The place of response certitude. Educational Psychology Review, 1(4), 279-308. Lafuente, E., & Hunt, T. (2007) Development: XMLDB documentation. Retrieved October 28, 2007, from http:// docs.moodle.org/en/Development:XMLDB_Documentation Lambert, M. P. (1996). The distance education and training council: At the cutting edge. Quality Assurance in Education, 4(4), 26-28. LAMS. (2006). Learning Activity Management System. Retrieved October 22, 2007, from http://lamsfoundation. org/ Larsen, J., Hansen, L.K., Szymkowiak, A., Christiansen, T., & Kolenda, T. (2002). Web mining: Learning from the world wide Web (Special Issue of Computational Statistics and Data Analysis). Computational Statistics and Data Analysis, 38, 517-532. Laurillard, D. (1993). Rethinking University Teaching: a framework for the effective use of educational technology. London: Routledge. Laurillard, D. (1999). A conversational framework for individual learning applied to the ‘learning organisation’ and the ‘learning society’, systems research and behavioral science (vol. 16, pp. 113-122). Laurillard, D. (2002). Rethinking university teaching: A conversational framework for the effective use of learning technologies (2nd ed.). London: Routledge.
Lee, G., & Weerakoon, P. (2001). The role of computeraided assessment in health professional education: A comparison of student performance in computer-based and paper-and-pen tests. Medical Teacher, 23, 152-157. Lee, T., Girolami, M., & Sejnowski, T. (1999). Independent component analysis using an extend infomax algorithm for mixed sub-Gaussian and super-Gaussian sources. Neural Computation, 11, 417-441. Lee, Y. L., & Nguyen, H. (2005). So are you online yet?! Distance and online education today. In M. Khosrow-Pour (Ed.), Managing modern organizations with information technology. Proceedings of the 2005 Information Resources Management Association International Conference (pp. 1035-1036). San Diego, CA: Information Resource Management Association. Leidner, D., & Jarvenpaa, S. (1995). The use of information technology to enhance management school education: A theoretical view. MIS Quarterly, 19(3), 265-291. Leonard, J., & Guha, S. (2001). Students’ perspectives on distance learning. Journal of Research on Technology in Education, 34(1). Levy, Y. (2007). Comparing dropouts and persistence in e-learning courses. Computers & Education, 48(2), 185-204. Liaw, S. S. (2002). Understanding user perceptions of WWW environments. Journal of Computer Assisted Learning, 18, 1-12. Liaw, S., Chen, G., & Huang, H. (in press). Users’ attitudes toward Web-based collaborative learning systems for knowledge management. Computers & Education. LIfIA & EIfEL. (2004). Open e-quality learning standards. Joint e-quality committee of LIfIA (Learning Innovations Forum d’Innovation d’Apprentissage) and EIfEL (European Institute for e-Learning). Linn, R. L., Baker, E. L., & Dunbar, S. B. (1991). Complex, performance-based assessment: Expectations and validation criteria. Educational Researcher, 20(8), 15-21.
Compilation of References
Little, J.W. (1990). The mentor phenomenon and the social organisation of teaching. In C. B. Courtney (Ed.), Review of Research in Education, 16, 297-351. Washington, DC: American Educational Research Association. Liu, H., & Yang, M. (2005). QoL guaranteed adaptation and personalization in e-learning systems. IEEE Transactions on Education, 48(4), 676-687. Lizzio, A., Wilson, K., & Simons, R. (2002).University students’ perceptions of the learning environment and academic outcomes: Implications for theory and practice. Studies in Higher Education, 27(1), 27-52. LOM. (2006). Retrieved October 22, 2007, from http:// ltsc.ieee.org/wg12/ Love, T., & Cooper, T. (2004). Designing online information systems for portfolio-based assessment: Design criteria and heuristics. Journal of Information Technology Education, 3, 65-81. Lowry, R. (2005). Computer-aided self-assessment. An effective tool. Chemistry Education Research and Practice, 6(4), 198-203. LTSC. (2006). Retrieved October 22, 2007, from http:// ieeeltsc.org/ M’tir, R.H., Jeribi, I., Rumpler, B., & Ghazala, H.H.B. (2004). Reuse and cooperation in e-learning systems. In Proceedings of the Fifth International Conference on Information Technology Based Higher Education and Training, ITHET (pp. 131-137). Ma, Y., Liu, B., Wong, C.K., Yu, P.S., & Lee, S.M. (2000). Targeting the right students using data mining. In KDD’00: Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 457-464). Macdonald, J. (2004). Developing competent e-learners: The role of assessment. Assessment & Evaluation in Higher Education, 29(2), 215-226. Maimon, O., & Rokach, L. (2005). Data mining and knowledge discovery handbook (1st ed.). Springer. Maragliano, R. (2000). Nuovo manuale di didattica multimediale. Editori Laterza.
Margerum-leys, J., & Bass, K.M. (2006). Electronic tools for online assessment: An illustrative case study from teacher education. In S.L. Howell & M. Hricko (Eds.), Online assessment and measurement. Case studies from higher education, K-12 and corporate (pp. 62-81). London: Information Science Publishing. Marques, C. (2004). E-learning: Uma nova forma de aprender. Revista e-Ciência, 1(1), 23. Marqués, P. (2001). Didáctica. Los Procesos de Enseñanza y Aprendizaje. La motivación. Retrieved October 26, 2007, from http://dewey.uab.es/pmarques/actodid. htm Martínez, A., Gómez, E., Dimitriadis, Y., Jorrín, I. M., Rubia, B., & Vega, G. (2005). Multiple case studies to enhance project-based learning in a computer architecture course. IEEE Transactions on Education, 48(3), 482-489. Marton, F., & Ramsden, P. (1988). What does it take to improve learning? In P. Ramsden (Ed.), Improving learning: New perspectives. London: Kogan Page. Masie, E. (2006). Nano-learning: Miniaturization of design. Chief Learning Officer (CLO) Magazine. Retrieved October 25, 2007, from http://www.clomedia. com/content/templates/clo_article.asp?articleid=1221 &zoneid=173 Mason, B. J., & Bruning, R. (2000). Providing feedback in computer-based instruction: What the research tells us. Retrieved October 29, 2007, from http://dwb.unl. edu/Edit/MB/MasonBruning.html Maurer, W. (2004). Estándares e-learning. SEESCYT. Retrieved October 28, 2007, from http://fgsnet.nova. edu/cread2/pdf/Maurer1.pdf Mazza, R., & Dimitrova, V. (2003, July 20-24). CourseVis: Externalising student information to facilitate instructors in distance learning. In U. Hoppe, F. Verdejo & J. Kay (Eds.), Proceedings of the International conference in Artificial Intelligence in Education, Sydney, Australia. Mazzolini, M., & Maddison, S. (2003). Sage, guide or ghost? The effects of instructor intervention on student
Compilation of References
participation in online discussion forums. Computers and Education, 40, 237-253. Mcalpine, M (2002). Principles of assessment. Luton: CAA Centre. McCarthy, J. P., & Anderson, L. (2000). Active learning techniques vs. traditional teaching styles: Two experiments from history and political science. Innovative Higher Education, 24(4), 279-294. McConnell, D. (2002). The experience of collaborative assessment in e-learning. Studies in Continuing Education, 24(1), 73-92. McCormick, N., & Leonard, J. (1996). Gender and sexuality in the cyberspace frontier. Women & Therapy, 19, 109-119. McCulloch, M. (1993). Total quality management: Its relevance for higher education. Quality Assurance, 1(2), 5-11. McCune, V. (2003). Promoting high-quality learning: Perspectives from the ETL project. In Proceedings: 14th Conference on University and College Pedagogy. Fredrikstad: Norwegian Network in Higher Education. McDonald, J., & Mcateer, E. (2003). New approaches to supporting students: Strategies for blended learning in distance and campus based environments. Journal of Educational Media, 28(2-3), 129-146. McKnight, R., & Demers, N. (2002). Evaluating course Web site utilization by students using Web tracking software: A constructivist approach. In Proceedings of the Technology, Colleges and Community Worldwide Online Conference 2002, Kapio’lani, Hi: University of Hawaii. Retrieved October 19, 2007, http://kolea.kcc. hawaii.edu/tcc/tcon02/presentations/mcknight.html McLean, M. (2004). Does the curriculum matter in peer mentoring? From mentee to mentor in problem-based learning: A unique case study. Mentoring and Tutoring, 12(2), 173-186. McLoughlin, C., & Luca, J. (2001). Quality in online delivery: What does it mean for assessment in e-learning environments? In G. Kennedy, M. Keppell, C. McNaught
0
& T. Petrovic (Eds.), Meeting at the crossroads. Proceedings of the 18th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (pp. 417-426). Melbourne, Australia: Australasian Society for Computers in Learning in Tertiary Education. Retrieved October 19, 2007, from http://www.ascilite. org.au/conferences/melbourne01/pdf/papers/mcloughlinc2.pdf McMahon, M., Limerick, B., & Gillies, J. (2002). Structured mentoring: A career transition support service for girls. Australian Journal of Career Development, 11(2), 7-12. McPherson, K. (2006). Wikis and student writing. ProQuest Information and Learning. Teacher Librarian. Retrieved October 25, 2007, from http://redorbit. com/news/education/761377/wikis_and_student_writing/index.html Mehlenbacher, B., Miller, C. R., Covington, D., & Larsen, J. S. (2000). Active and interactive learning online: A comparison of Web-based and conventional writing classes. IEEE Transactions on Professional Communication, 43(2), 166-184. MENON Network EEIG. (2004). Sustainable environment for the evaluation of quality in e-learning. SEEQUEL core quality framework. E-Learning Initiative, European Commission. MENTOR. (2001). US quality standards for e-mentoring: Elements of effective practice for e-mentoring. E-Mentoring Clearinghouse. Retrieved October, 2005, from www.mentoring.org/emc Merceron, A., & Yacef, K. (2003). A Web-based tutoring tool with mining facilities to improve learning and teaching. In 11th International Conference on Artificial Intelligence in Education (pp. 41-52). Mergen, E., Grant, D., & Widrick, S.M. (2000). Quality management applied to higher education. Total Quality Management, 11(3), 345-352. Mertens, R., Farzan, R., & Brusilovsky, P. (2006). Social navigation in Web lectures. In U. K. Wiil, P. J. Nürnberg & J. Rubart (Eds.), Proceedings of Hypertext Conference 2006.
Compilation of References
Metros, S.E., & Bennett, K. (2002). Learning objects in higher education. Educause Research Bulletin, 19, 2. Retrieved October 16, 2007, from www.educause. edu/ir/library/pdf/ERB0219.pdf Meyen, E.L., Aust, R., Gauch, J.M., Hinton, H.S., Isaacson, R.E., Smith, S.J., et al. (2002 ). E-learning: A programmatic research construct for the future. Journal of Special Education Technology, 17(3), 37-46. Michalsky, R.S., & Stepp, R.E. (1983). Automated construction of classifications: Conceptual clustering versus numerical taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 5(4), 396-410. Mickle, M. H., Shuman, L., & Spring, M. (2004). Active learning courses on the cutting edge of technology. In Proceedings of the 34th Annual Conference Frontiers in Education (FIE’04) (Vol. 1, pp. T2F/19 – T2F/23). Piscataway, NJ: IEEE. Miller, A. (2004). E-mentoring: An overview. Paper presented at the First Aimhigher Networking Meeting, Aston University. Miller, C.M.I., & Parlett, M. (1974). Up to the mark: A study of the examination game. Guildford: Society for Research into Higher Education. Minaei, B., Kashy, D.A., Kortemeyer, G., & Punch, W. (2003). Predicting student performance: An application of data mining methods with an educational Web-based system. In Proceedings of 33rd Frontiers in Education Conference. Mishra, S. (2002). A design framework for online learning environments. British Journal of Educational Technology, 33(4), 493-496. Mitra, A., & Steffensmeier, T. (2000). Change in student attitudes and student computer use in a computer-enriched environment. Journal of Research on Computing in Education, 32(3), 417-431. Miyahara, K., & Pazzani, M. (2000). Collaborative filtering with the simple Bayesian classifier. In Pacific Rim International Conference on Artificial Intelligence (pp. 679-689).
Monk, A., & Howard, S. (1998, March-April). The rich picture: A tool for reasoning about work context. Interactions, 21-30. Moodle. (2006). Retrieved October 21, 2007, from http://moodle.org/doc/ Moodle. (2006). Retrieved October 22, 2007, from http://demo.moodle.com/ Moodle. (2007). Philosophy. Retrieved October 17, 2007, from http://docs.moodle.org/en/Philosophy Moore, G. R. (1991). Computer to computer: Mentoring possibilities. Educational Leardership, 49(3), 40. Mor, E., & Minguillón, J. (2004). E-learning personalization based on itineraries and long-term navigational behavior. In Thirteenth World Web Conference (pp. 264-265). Morley, R. (1996). Painting trucks at general motors: The effectiveness of a complexity-based approach. In Ernst and Young Center for Business Innovation, (Ed.), Embracing Complexity: Exploring the Application of Complex Adaptive Systems to Business (pp. 53-58). Cambridge, MA. Morozov, M., Tanakov, A., Gerasimov, A., Bystrov, D., & Cvirco, E. (2004). Virtual chemistry laboratory for school education. In Proceedings of the IEEE International Conference on Advanced Learning Technologies (ICALT’04) (pp. 605-608). IEEE Computer Society. Mortera-Gutierrez, F. (2006). Faculty best practices using blended learning in e-learning and in face-to-face instruction. International Journal on E-Learning, 5(3), 313-337. Murray, T. (1999). Authoring intelligent tutoring systems: An analysis for the state of the art. International Journal of Artificial Intelligence in Education, 10, 98-129. Muscettola, N., Dorais, G.A., Fry, C., Levinson, R., & Plaunt, C. (2002). IDEA: Planning at the core of autonomous reactive agents. In Proceedings of the Workshop Online Planning and Scheduling, AIPS 2002 (pp. 49-55). Toulouse, France.
Compilation of References
Myers, I.B., McCaulley, M.H., Quenk, N.I., & Hammer, A.L. (1999). MBTI manual: A guide to the development and use of the Myers-Briggs Type Indicator. Paolo Alto, CA: Consulting Psychologist Press. Nachmias, R. (2002). A research framework for the study of a campus-wide Web-based academic instruction project. Internet and Higher Education, 5(3), 213-229. NADE. (2001). Quality standards for distance education. Norwegian Association for Distance and Flexible Education. Retrieved October 30, 2007, from http://www. nade-nff.no/nff2/filer/Kvalitet/Kvalitetsnormer%20for %20fjernundervisning.pdf Ndubisi, N. O. (2006). Factors of online learning adoption: A comparative juxtaposition of the theory of planned behaviour and the technology acceptance model. International Journal on E-Learning, 5(4), 571-591. Neal, L. (2006, January 19). Predictions for 2006: Elearning experts map the road ahead. eLearn Magazine. Retrieved October 19, 2007, from http://www.elearnmag. org/subpage.cfm?section=articles&article=31-1 Ngai, E., & Poon, J. (2007). Empirical examination of the adoption of WebCT using TAM. Computers and Education, 42(2), 250-267. Noe, R. A. (1998). An investigation of the determinants of successful assigned mentoring relationships. Personnel Psychology, 41, 457-479. Nowack, K. (1996). Is the Myers Briggs Type Indicator the right tool to use? Performance in Practice, 6. O’Donovan, B., Price, M., & Rust, C. (2004). Know what I mean? Enhancing student understanding of assessment standars and criteria. Teaching in Higher Education, 9(3), 325-336.
O’Neill, K. (2004). Building social capital in a knowledge-building community: Telementoring as a catalyst. Interactive Learning Environments, 12(3), 179-208. O’Neill, K., & Harris, J. (2005). Bridging the perspectives and developmental needs of all participants in curriculum-based telementoring programs. Journal of Research on Technology in Education, 37(2), 111-128. O’Neill, K., Harris, J., Cravens, J., & Neils, D. (2002). Perspectives on e-mentoring: A virtual panel holds an online dialogue. National Mentoring Center Newsletter, 9, 5-12. O’Regan. (2006). MOLIE: Mentoring online in Europe. Salford: University of Salford. O’Reilly, T. (2005). Web 2.0: Compact definition? Retrieved October 18, 2007, from http://radar.oreilly.com/ archives/2005/10/web_20_compact_definition.html O’Reilly, T. (2005). What Is Web 2.0. design patterns and business models for the next generation of software. O’Reilly Web. Retrieved October 25, 2007, from http:// www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/ what-is-web-20.html ODLQC. (2006). Open and distance learning quality council standards. Open and Distance Learning Quality Council. Retrieved October 30, 2007, from http://www. odlqc.org.uk/standard.doc OECD. (2005). Formative assessment: Improving learning in secondary classrooms. Paris: OECD. Oliver, M., & Shaw, G. P. (2003). Asynchronous discussion in support of medical education. Journal of Asynchronous Learning Networks, 7(1), 56-67.
O'Hear, S. (2005). Seconds out, round two. The Guardian. Retrieved October 25, 2007, from http://education.guardian.co.uk/elearning/story/0,10577,1642281,00.html
O'Hear, S. (2006). E-learning 2.0: How Web technologies are shaping education. In R. MacManus (Ed.). Retrieved October 25, 2007, from http://www.readwriteweb.com/archives/e-learning_20.php
Oliver, M., & Trigwell, K. (2005). Can blended learning be redeemed? E-Learning, 2(1). Retrieved October 18, 2007, from http://www.wwwords.co.uk/pdf/viewpdf.asp?j=elea&vol=2&issue=1&year=2005&article=3_Oliver_ELEA_2_1_web&id=83.104.158.140
O'Malley, J., & McCraw, H. (1999). Student perceptions of distance learning, online learning and the traditional classroom. Online Journal of Distance Learning Administration, 2(4), 1-16.
Compilation of References
O'Neill, K., & Harris, J. (2000, April 24-28). Is everybody happy? Bridging the perspectives and developmental needs of participants in telementoring programs. Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, LA.
Onions, C.T. (Ed.). (1983). The shorter Oxford English dictionary (3rd ed.). Oxford: Oxford University Press.
Orlikowski, W. J. (2000). Using technology and constituting structure: A practice lens for studying technology in organizations. Organization Science, 11(4), 404-428.
Orsini, J.N. (2000). Profound education. Total Quality Management, 11(4-6), 762-766.
Ortigosa, A., & Carro, R. (2003). The continuous empirical evaluation approach: Evaluating adaptive Web-based courses. User modeling. Lecture Notes in Computer Science, 2702, 163-167.
Owlia, M.S., & Aspinwall, E.M. (1996). A framework for the dimensions of quality in higher education. Total Quality Management, 7(2).
Palloff, R.M., & Pratt, K. (1999). Building communities in cyberspace: Effective strategies for the online classroom. San Francisco: Jossey-Bass.
Paredes, P., & Rodríguez, P. (2002). Considering sensing-intuitive dimension to exposition-exemplification in adaptive sequencing. In P. De Bra, P. Brusilovsky & R. Conejo (Eds.), Adaptive hypermedia and adaptive Web-based systems. Lecture Notes in Computer Science, 2347, 556-559.
Parshall, C. G., Davey, T., & Pashley, P. J. (2000). Innovative item types for computerized testing. In W. Van der Linden & C. A. W. Glas (Eds.), Computerized adaptive testing: Theory and practice (pp. 129-148). Norwell, MA: Kluwer Academic Publishers.
Patel, N.V. (1995). Application of soft systems methodology to the real world process of teaching and learning. International Journal of Educational Management, 9(1), 13-23.
Pearce, J. (2006). Using wiki in education. The Science of Spectroscopy. Retrieved October 25, 2007, from http://www.scienceofspectroscopy.info/edit/index.php?title=Using_wiki_in_education
Pejtersen, A.M. (1989). The BOOKHOUSE: An icon based database system for fiction retrieval in public libraries. In Proceedings of the 7th Nordic Information and Documentation Conference, Århus, Denmark.
Pennock, D., Horvitz, E., Lawrence, S., & Lee Giles, C. (2000). Collaborative filtering by personality diagnosis: A hybrid memory- and model-based approach. In 16th Conference on Uncertainty in Artificial Intelligence (pp. 481-488).
Penuel, B., & Roschelle, J. (1999). Designing learning: Cognitive science principles for the innovative organization. Stanford Research Institute International, 1-26.
Pérez Juste, R. (2006). Evaluación de programas educativos [Evaluation of educational programmes]. Madrid: La Muralla.
Pérez Juste, R., López, F., Peralta, M.D., & Municio, P. (2004). Hacia una educación de calidad: Gestión, instrumentos y evaluación [Towards quality education: Management, instruments and evaluation]. Madrid: Narcea.
Perren, L. (2003). The role of e-mentoring in entrepreneurial education and support: A meta-review of academic literature. Education and Training, 45(8-9), 517-525.
Perrin-Glorian, M. J. (1994). Théorie des situations didactiques: Naissance, développement, perspectives [Theory of didactical situations: Birth, development, perspectives]. In M. Artigue, R. Gras, C. Laborde & P. Tavignot (Eds.), Vingt ans de didactique des mathématiques en France (pp. 97-147). Paris: La Pensée Sauvage.
Peterson, E. R., Deary, I. J., & Austin, E. J. (2003). The reliability of Riding's Cognitive Style Analysis test. Personality and Individual Differences, 34, 881-891.
Petrova, K. (2001). Teaching differently: A hybrid delivery model. In N. Delener & C. N. Chao (Eds.), Proceedings of the 2001 Global Business and Technology Association International Conference (pp. 717-727). Istanbul, Turkey: Global Business and Technology Association.
Petrova, K. (2002). Course design for flexible learning. New Zealand Journal of Applied Computing and Information Technology, 6(1), 45-50. Petrova, K. (2007). Mobile learning as a mobile business application. International Journal of Innovation in Learning, 4(1), 1-13. Petrova, K., & Sinclair, R. (2005). Business undergraduates learning online: A one semester snapshot. International Journal of Education and Development using Information and Communication Technology, 1(4), 69-88. Retrieved October 19, 2007, from http://ijedict. dec.uwi.edu/viewissue.php?id=6
Pettigrew, A. (1990). Longitudinal field research on change: Theory and practice. Organization Science, 1(3), 267-291.
Phillimore, R. (2002). Face to face lectures or e-content: Student and staff perspectives. In R. Kinshuk, K. Lewis, R. Akahori, T. Kemp, L. Okamoto, C. Henderson & H. Lee (Eds.), Proceedings of the 9th International Conference on Computers in Education (Vol. 1, pp. 211-212). Auckland, New Zealand: Asia-Pacific Society for Computers in Education.
Philpot, T. A., Hall, R. H., Hubing, N., & Flori, R. E. (2005). Using games to teach statics calculation procedures: Application and assessment. Computer Applications in Engineering Education, 13(3), 222-232.
Phipps, R., & Merisotis, J. (1999). What's the difference? A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC: The Institute for Higher Education Policy.
Phipps, R., & Merisotis, J. (2000). Quality on the line: Benchmarks for success in Internet-based distance education. Washington, DC: IHEP - Institute for Higher Education Policy.
Picciano, A. (2002). Beyond student perceptions: Issues of interaction, presence, and performance in an online course. Journal of Asynchronous Learning Networks, 6(1), 21-40.
Pillay, H. (1998). An investigation of the effect of individual cognitive preferences on learning through computer-based instruction. Educational Psychology, 18(2), 171-182.
Pils, C., Roussaki, L., & Strimpakou, M. (2006). Location-based context retrieval and filtering. Lecture Notes in Computer Science, 3987, 256-273.
Pintrich, P. R., & Schrauben, B. (1992). Students' motivational beliefs and their cognitive engagement in classroom academic tasks. In D. H. Schunk & J. L. Meece (Eds.), Student perceptions in the classroom. Hillsdale, NJ: Lawrence Erlbaum.
Piramuthu, S. (2005). Knowledge-based Web-enabled agents and intelligent tutoring systems. IEEE Transactions on Education, 48(4), 750-756.
Pituch, K.A., & Lee, Y.-K. (2006). The influence of system characteristics on e-learning use. Computers & Education, 47(2), 222-244.
Plous, S. (2000). Tips on creating and maintaining an educational World Wide Web site. Teaching of Psychology, 27, 63-70.
Polo, F. (2006). Marketing 2.0: New way to old things. Jornada Internet de Nueva Generación: Web 2.0, Internet 2.0. Spain. Retrieved October 25, 2007, from http://internetng.dit.upm.es/ponencias-jing/2006/polo.pdf
Ponzurick, T. G., France, K.R., & Logar, C.M. (2000). Delivering graduate marketing education: An analysis of face-to-face versus distance education. Journal of Marketing Education, 22(3), 180-187.
Poole, D. M. (2000). Student participation in a discussion-oriented online course: A case study. Journal of Research on Computing in Education, 33(2), 162-177.
Popham, W.J. (1983). Problemas y técnicas de evaluación educativa [Problems and techniques of educational evaluation]. Madrid: Anaya.
Potelle, H., & Rouet, J. F. (2003). Effects of content representation and readers' prior knowledge on the comprehension of hypertext. International Journal of Human-Computer Studies, 58, 327-345.
Preece, J., Rogers, Y., & Sharp, H. (2002). Interaction design: Beyond human-computer interaction. Wiley.
Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5). NCB University Press.
Prinz, W. (2006). Social Web applications. Jornada Internet de Nueva Generación: Web 2.0, Internet 2.0. Spain. Retrieved October 25, 2007, from http://internetng.dit.upm.es/ponencias-jing/2006/prinz.pdf
Puntambekar, S. (2006). Analyzing collaborative interactions: Divergence, shared understanding and construction of knowledge. Computers & Education, 47(3), 332-351.
QAA. (2004). Code of practice for the assurance of academic quality and standards in higher education (2nd ed.). Mansfield: Quality Assurance Agency for Higher Education.
Qual E-Learning Project Consortium. (2003). Qual e-learning project. Qual E-Learning Project Consortium. Retrieved October 30, 2007, from http://www.qual-eLearning.net/cgi/index.php?wpage=overview
Qual E-Learning Project Consortium. (2004). Handbook of best practices for the evaluation of e-learning effectiveness. Qual E-Learning Project Consortium.
Qual E-Learning Project Consortium. (2007). Qual e-learning evaluation tool. Qual E-Learning Project. Retrieved October 30, 2007, from http://www.qual-eLearning.net/cgi/index.php
Quality Assurance Agency for Higher Education. (2002). Distance learning guidelines. Retrieved October 19, 2007, from http://www.qaa.ac.uk/public/dlg/dlg_textonly.htm
Quinlan, R.J. (1992). C4.5: Programs for machine learning. San Mateo, CA: Morgan Kaufmann.
Quinn, D., & Reid, I. (2003). Using innovative online quizzes to assist learning. Retrieved October 29, 2007, from http://ausweb.scu.edu.au/aw03/papers/quinn/paper.html
Recommendation of the European Parliament and the Council of 18 December 2006 on Key Competences for Lifelong Learning. (2006, December 12). Official Journal of the European Union. Retrieved October 18, 2007, from http://www.cimo.fi/dman/Document.phx/~public/Sokrates/Comenius/keycompetences06.pdf
Reeves, T. C. (2000). Alternative assessment approaches for online learning environments in higher education. Journal of Educational Computing Research, 23(1), 101-111.
Reeves, W. (1999). Learner-centered design: A cognitive view of managing complexity in product, information, and environmental design. Thousand Oaks, CA: Sage.
Reichlmayr, T. (2005). Enhancing the student project team experience with blended learning techniques. In Proceedings of the 35th Annual Conference Frontiers in Education (FIE'05) (pp. T4F/6-T4F/11). Piscataway, NJ: IEEE.
Reilly, R. (2005). Guest editorial. Web-based instruction: Doing things better and doing better things. IEEE Transactions on Education, 48(4), 565-566.
Rekkedal, T. (2006). Distance learning and e-learning quality for SMEs—State of the art. Paper prepared for the EU Leonardo Project, E-learning Quality for SMEs: Guidance and Counselling.
Rengarajan, R. (2001). LCMS and LMS: Taking advantage of tight integration. Click 2 Learn. Retrieved October 28, 2007, from http://www.e-learn.cz/soubory/lcms_and_lms.pdf
Rhodes, J. E. (2002). A critical view of youth mentoring. Boston.
Riding, R., & Rayner, S. (1998). Cognitive styles and learning strategies: Understanding style differences in learning and behaviour. London: David Fulton Publishers.
R-Moreno, M.D. (2003). Representing and planning tasks with time and resources. PhD thesis, Universidad de Alcalá.
R-Moreno, M.D., & Camacho, D. (2007). AI techniques for automatic learning design. In Proceedings of the International e-Conference of Computer Science (IeCCS 2006), Lecture Series on Computer and Computational Sciences (LSCCS) (Vol. 8, pp. 193-197). VSP/Brill Academic Publishers.
R-Moreno, M.D., Oddi, A., Borrajo, D., & Cesta, A. (2006). IPSS: A hybrid approach to planning and scheduling integration. IEEE Transactions on Knowledge and Data Engineering, 18(12), 1681-1695.
Robey, D., & Boudreau, M. (1999). Accounting for the contradictory organizational consequences of information technology: Theoretical directions and methodological implications. Information Systems Research, 10(2), 167-185.
Robinson, A., & Udall, M. (2006). Using formative assessment to improve student learning through critical reflection. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education (pp. 92-99). Oxon: Routledge.
Rodríguez-Conde, M.J. (2006). Teaching evaluation in an e-learning environment. In E. Verdú, M.J. Verdú, J. García & R. López (Eds.), Best practices in e-learning: Towards a technology-based and quality education (pp. 55-70). Valladolid: BEM.
Rodríguez-Conde, M.J., & Herrera-García, M.E. (2005). Assessment processes in tele-learning programmes. In F.J. García, J. García, M. López, R. López & E. Verdú (Eds.), Educational virtual spaces in practice: The Odiseame approach (pp. 161-178). Barcelona: Ariel.
Rogers, E. M. (1995). Diffusion of innovations. New York: Free Press.
Rohrer, R.M., & Swing, E. (1997). Web-based information visualization. IEEE Computer Graphics and Applications, 17(4), 52-59.
Romero, C., Ventura, S., De Bra, P., & Castro, C. (2003). Discovering prediction rules in AHA! courses. In 9th International Conference on User Modeling (pp. 25-34).
Romilly, J.D. (1992). The great sophists in Periclean Athens. Oxford, UK, New York: Clarendon Press, Oxford University Press.
Rosales, C. (1990). Evaluar es reflexionar sobre la enseñanza [To evaluate is to reflect on teaching]. Madrid: Narcea.
Rosen, A. (2006). Technology trends: E-learning 2.0. Learning Solutions E-Magazine. Retrieved October 25, 2007, from http://www.readygo.com/e-learning2.0.pdf
Ross, B. (2004). First aimhigher e-mentoring networking meeting. Birmingham: Middlesex University.
Ross, J.A., & Bruce, C.D. (2007). Teacher self-assessment: A mechanism for facilitating professional growth. Teaching & Teacher Education: An International Journal of Research and Studies, 23(2), 146-159.
Rowntree, D. (1982). Educational technology in curriculum development. Newcastle upon Tyne, UK: Athenaeum Press Ltd.
Rudner, L. M. (1998). An online, interactive, computer adaptive testing tutorial. Retrieved October 29, 2007, from http://edres.org/scripts/cat
Ruttenbur, B., Spickler, G., & Lurie, S. (2000). E-learning—The engine of the knowledge economy. New York: Morgan Keegan & Co.
Ryan, Y. (2000). Assessment in online teaching. Paper presented at the Australian Universities Teaching Committee Forum 2000, Canberra, Australia. Retrieved October 29, 2007, from http://www.autc.gov.au/forum/papers/onlineteaching1.rtf
Saaty, T. L. (1994). How to make a decision: The analytic hierarchy process. Interfaces, 24(6), 19-43.
Saba, F. (1999). Is distance education comparable to traditional education? Retrieved October 19, 2007, from http://www.distance-educator.com/der/comparable.html
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119-144.
Sadler, D. R. (1998). Formative assessment: Revisiting the territory. Assessment in Education, 5(1), 77-84.
Sahney, S., Banwet, D.K., & Karunes, S. (2004). Conceptualizing total quality management in higher education. The TQM Magazine, 16(2), 145-159.
Salazar, A., Gosalbez, J., Bosch, I., Miralles, R., & Vergara, L. (2004). A case study of knowledge discovery on academic achievement, student desertion and student retention. In IEEE 2nd International Conference on Information Technology: Research and Education (pp. 150-154).
Salmon, G. (2000). E-moderating: The key to teaching and learning online. London: Kogan Page.
Sambell, K., & McDowell, L. (1998). The construction of the hidden curriculum: Messages and meanings in the assessment of student learning. Assessment and Evaluation in Higher Education, 23(4), 391-402.
Sanders, D., & Morrison-Shetlar, A. I. (2001). Student attitudes towards Web-enhanced instruction in an introductory biology course. Journal of Research on Computing in Education, 33(3), 251-262.
Scanlan, C. L. (2003). Reliability and validity of a student scale for assessing the quality of Internet-based distance learning. Online Journal of Distance Learning Administration, 6(3). Retrieved October 30, 2007, from http://www.westga.edu/~distance/ojdla/fall63/scanlan63.html
Scalise, K., & Gifford, B. (2006). Computer-based assessment in e-learning: A framework for constructing "Intermediate Constraint" questions and tasks for technology platforms. The Journal of Technology, Learning, and Assessment, 4(6), 1-44.
Schacht, P. (2006). The collaborative writing project. Retrieved October 25, 2007, from http://node51.cit.geneseo.edu/WIKKI_TEST/mediawiki/index.php/Main_Page
Schellens, T., & Valcke, M. (2006). Fostering knowledge construction in university students through asynchronous discussion groups. Computers & Education, 46(4), 349-370.
Schmitz, C., Staab, S., Studer, R., Stumme, G., & Tane, J. (2002). Accessing distributed learning repositories through a courseware watchdog. In Proceedings of the E-Learn 2002 World Conference on E-Learning in Corporate, Government, Healthcare, & Higher Education.
Schön, D.A. (1971). Beyond the stable state: Public and private learning in a changing society. Temple Smith.
Schön, D.A. (1983). The reflective practitioner. New York: Basic Books.
Schön, D.A. (1987). Educating the reflective practitioner: Toward a new design for teaching and learning in the professions. San Francisco: Jossey-Bass Publishers.
Schön, D.A. (1991). The reflective turn: Case studies in and on educational practice. New York: Teachers Press, Columbia University.
Schulze, A., & O'Keefe, A. (2002, August). Effectively using self-assessment in online learning. Paper presented at the 18th Annual Conference on Distance Teaching and Learning, Madison, Wisconsin.
SCORM. (2006). Sharable Content Object Reference Model. Retrieved October 22, 2007, from http://www.academiccolab.org/projects/scorm.html
Scott, D.W., & Sain, S.R. (2004). Multi-dimensional density estimation. In C. R. Rao, E. J. Wegman & J. L. Solka (Eds.), Handbook of statistics: Data mining and computational statistics (Vol. 24, pp. 229-261). Elsevier.
Scott, G. (2001). Assuring quality for online learning. Retrieved October 19, 2007, from http://www.qdu.uts.edu.au/pdf%20document/QA%20for%200
Scriven, M. (1967). The methodology of evaluation. In R. Tyler, R. Gagne & M. Scriven (Eds.), Perspectives of curriculum evaluation (pp. 39-83). Chicago: Rand McNally.
Scriven, M. (1995). Student ratings offer useful input to teacher evaluations. Washington, DC: ERIC Clearinghouse on Assessment and Evaluation, Catholic University of America.
Scriven, M. (1999). The nature of evaluation (Pts. I–II). Washington, DC: ERIC Clearinghouse on Assessment and Evaluation.
SEEL. (2004). Quality guidelines for learning strategy and innovation, version 3. Supporting Excellence in E-Learning.
Selim, H. (2007). Critical success factors for e-learning acceptance: Confirmatory factor models. Computers & Education, 49(2), 396-413.
Selim, H. M. (2003). An empirical investigation of student acceptance of course Web sites. Computers and Education, 40, 343-360.
Selim, H. M. (2005). E-learning critical success factors: An exploratory investigation of student perceptions. In M. Khosrow-Pour (Ed.), Managing modern organizations with information technology: Proceedings of the 2005 Information Resources Management Association International Conference (pp. 340-346). San Diego, CA: Information Resources Management Association.
Selwyn, N. (1999). Students' attitudes towards computers in sixteen to nineteen education. Education and Information Technologies, 4(2), 129-141.
Semet, Y., Lutton, E., & Collet, P. (2003). Ant colony optimization for e-learning: Observing the emergence of pedagogic suggestions. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium.
Seoane, A.M., García, F.J., Bosom, Á., Fernández, E., & Hernández, M.J. (2007). Online tutoring methodology approach. International Journal of Continuing Engineering Education and Life-Long Learning (IJCEELL), 17(6), 479-492.
Seoane Pardo, A.M., & García Peñalvo, F.J. (2006). Determining quality for online activities: Methodology and training of online tutors as a challenge for achieving the excellence. WSEAS Transactions on Advances in Engineering Education, 3(9), 823-830.
Seoane Pardo, A.M., & García Peñalvo, F.J. (in press). Tutoring & mentoring online: Definition, roles, skills and case studies. In G.D. Putnik & M.M. Cunha (Eds.), Encyclopedia of networked and virtual organizations. Hershey, PA: Idea Group Inc.
Seoane Pardo, A.M., García Peñalvo, F.J., Bosom Nieto, Á., Fernández Recio, E., & Hernández Tovar, M.J. (2006). Tutoring online as quality guarantee on e-learning-based lifelong learning: Definition, modalities, methodology, competences and skills. In Virtual Campus 2006 Selected and Extended Papers, CEUR Workshop Proceedings, 186, 41-55.
Seufert, S., & Euler, D. (2005). Nachhaltigkeit von E-Learning-Innovationen: Fallstudien zu Implementierungsstrategien von E-Learning als Innovationen an Hochschulen [Sustainability of e-learning innovations: Case studies on implementation strategies for e-learning as an innovation at universities]. St. Gallen: Swiss Centre for Innovations in Learning. Retrieved October 30, 2007, from http://www.scil.ch/publications/docs/2005-01-seufert-euler-nachhaltigkeit-e-Learning.pdf
SEVAQ. (2005). Self evaluation for quality in e-learning. Retrieved October 30, 2007, from http://www.sevaq.com
Sfard, A. (2000). Symbolizing mathematical reality into being—or how mathematical discourse and mathematical objects create each other. In P. Cobb, E. Yackel & K. McClain (Eds.), Symbolizing and communicating in mathematics classrooms. Mahwah, NJ: Lawrence Erlbaum Associates.
Shank, R. C. (2002). Designing world-class e-learning. McGraw-Hill.
Sharpe, R., & Benfield, G. (2005). The student experience of e-learning in higher education. Brookes eJournal of Learning and Teaching, 1(3), 1-9. Retrieved October 19, 2007, from http://www.brookes.ac.uk/publications/bejlt/volume1issue3/academic/sharpe_benfield.pdf
Shee, D., & Wang, Y. (in press). Multi-criteria evaluation of the Web-based e-learning system: A methodology based on learner satisfaction and its applications. Computers & Education.
Sherman, R. C. (1998). Using the World Wide Web to teach everyday applications of social psychology. Teaching of Psychology, 25, 212-216.
Shin, N., & Kim, J. (1999). An exploration of learner progress and dropout in Korea National Open University. Distance Education: An International Journal, 20, 81-97.
Shuell, T. (1992). Designing instructional computing systems for meaningful learning. In M. Jones & P. Winne (Eds.), Adaptive learning environments. New York: Springer Verlag.
Sicilia, M.A., Sánchez-Alonso, S., & García-Barriocanal, E. (2006, March 23-25). Supporting the process of learning design through planners. In Virtual Campus 2006 Post-Proceedings, CEUR Workshop Proceedings (Vol. 186). Barcelona, Spain.
Siemens, G. (2005). Connectivism: A learning theory for the digital age. Retrieved October 25, 2007, from http://www.elearnspace.org/Articles/connectivism.htm
Silander, P., & Rytkönen, A. (2005). An intelligent mobile tutoring tool enabling individualization of students' learning processes. In Proceedings of the 4th World Conference on mLearning (paper 59), Cape Town, Republic of South Africa.
Silvescu, A., Reinoso-Castillo, J., & Honavar, V. (2001). Ontology-driven information extraction and knowledge acquisition from heterogeneous, distributed, autonomous biological data sources. In International Joint Conferences on Artificial Intelligence (IJCAI) (pp. 1-10).
Simons, R. J., van der Linden, J., & Duffy, T. (Eds.). (2000). New learning. Dordrecht: Kluwer Academic.
Sinclair, R. M. S. (2003). Components of quality in distance education. In G. Davies & E. Stacey (Eds.), Quality education @ a distance (pp. 257-264). Boston: Kluwer Academic Publishers.
Sinclair, R. M. S. (2003). Stakeholders' views of quality in flexibly delivered courses. Unpublished masters research paper, Deakin University, Geelong, Australia.
Single, P. B., & Muller, C. B. (1999). Electronic mentoring: Issues to advance research and practice. Paper presented at the Annual Meeting of the International Mentoring Association, Atlanta, GA.
Single, P. B., & Muller, C. B. (2000, April 24-28). Electronic mentoring: Quantifying the programmatic effort. Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, LA.
Single, P. B., & Muller, C. B. (2001). When e-mail and mentoring unite: The implementation of a nationwide electronic mentoring program: MentorNet, the national electronic industrial mentoring network for women in engineering and science. American Society for Training and Development.
Single, P. B., & Muller, C. B. (2005). Electronic mentoring programs: A model to guide practice and research. Mentoring and Tutoring, 13(2), 305-320.
Single, P. B., & Muller, C. B. (forthcoming). Electronic mentoring programs: A model to guide practice and research. Retrieved January 2008 from www.apesma.asu.au/mentorsonline/reference/pdfs/muller_and_boyle_single.pdf
Single, P. B., Muller, C. B., Cunningham, C. M., & Single, R. M. (2000). Electronic communities: A forum for supporting women professionals and students in scientific fields. Journal of Women and Minorities in Science and Engineering, 6, 115-129.
Single, P. B., & Single, R. M. (2005). E-mentoring for social equity: Review of research to inform program development. Mentoring and Tutoring, 13(2), 301-320.
Single, P. B., & Single, R. M. (2005). Mentoring and the technology revolution: How face-to-face mentoring sets the stage for e-mentoring. In F. K. Kochan & J. T. Pascarelli (Eds.), Creating successful telementoring programs (pp. 7-27). Greenwich: Information Age Press.
Sirvanci, M.B. (2004). TQM implementation: Critical issues for TQM implementation in higher education. The TQM Magazine, 16(6), 382-386.
Sluijsmans, D. M. A., Prins, F. J., & Martens, R. L. (2006). The design of competency-based performance assessment in e-learning. Learning Environments Research, 9, 45-66.
Small, M., & Lohrasbi, A. (2003). Student perspectives on online degrees and courses: An empirical analysis. International Journal on E-learning, 2(2), 15-28.
Smith, T., & Ingersoll, R. (2004). What are the effects of induction and mentoring on beginning teacher turnover? American Educational Research Journal, 41(3), 681-714.
Snyder, B.R. (1971). The hidden curriculum. Cambridge, MA: MIT Press.
Song, L., Singleton, E. S., Hill, J. R., & Koh, M. H. (2004). Improving online learning: Students' perceptions of useful and challenging characteristics. Internet and Higher Education, 7(1), 59-70.
Southern Regional Education Board. (2000). Principles of good practice. Retrieved October 19, 2007, from http://ww.electroniccampus.org/student/srecinfor/publicaitons/Principles_2000.pdf
Sowa, J.F. (2000). Knowledge representation: Logical, philosophical and computational foundations. Pacific Grove, CA: Brooks Cole.
SPI. (2003). Empre-learning: Promoção de Estruturas de e-Learning Inovadores, em Língua Portuguesa, que Permitam o Aumento de Competências e Aumentem a Empregabilidade [Empre-learning: Promotion of innovative e-learning structures, in Portuguese, that build competences and increase employability]. Porto: Sociedade Portuguesa de Inovação.
Spillane, M.G. (1999, June). Portfolio assessment in higher education: Seeking credibility on the campus. Journal of the National Institute of the Assessment of Experiential Learning, 17-28.
Srivastava, J., Cooley, R., Deshpande, M., & Tan, P. (2000). Web usage mining: Discovery and applications of usage patterns from Web data. In SIGKDD Explorations (pp. 12-23).
Stake, R.E. (1999). Representing quality in evaluation. Paper presented at the annual meeting of the American Educational Research Association, Quebec, Canada.
Stake, R.E., & Cohernour, E.J. (1999). Evaluation of college teaching in a community of practice. University of Illinois.
Stephens, D. (2001). Use of computer assisted assessment: Benefits to students and staff. Education for Information, 19, 265-275.
Stephenson, J.E., Brown, C., & Griffin, D.K. (in press). Electronic delivery of lectures in the university environment: An empirical comparison of three delivery styles. Computers & Education.
Stiggins, R. J. (1987). Design and development of performance assessment. Educational Measurement: Issues and Practice, 4, 263-273.
Stockley, D. (2003). E-learning definition. Retrieved October 18, 2007, from http://derekstockley.com.au/elearning-definition.html
Stokes, A. (2001). Using telementoring to deliver training to SMEs: A pilot study. Education + Training, 43(6), 317-324.
Stokes, P., Garrett-Harris, R., & Hunt, K. (2003). An evaluation of electronic mentoring (e-mentoring). Paper presented at the 10th European Mentoring & Coaching Conference.
Stufflebeam, D.L. (1999). Foundational models for 21st century program evaluation. Kalamazoo, MI: Western Michigan University, The Evaluation Center.
Summerville, J. (1999). Role of awareness of cognitive style in hypermedia. International Journal of Educational Technology, 1.
Sun, P., Tsai, R., Finger, G., Chen, Y., & Yeh, D. (in press). What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers & Education.
Swan, M. K. (1995). Effectiveness of distance learning courses—Students' perceptions. In Proceedings of the 22nd Annual National Agricultural Education Research Meeting (pp. 34-38), Denver, CO. Retrieved October 19, 2007, from http://www.ssu.missouri.edu/SSU/AgEd/NAERM/s-a-4.htm
Symons, S., & Symons, D. (2002). Using the Inter- and Intranet in a university introductory psychology course to promote active learning. In Proceedings of the International Conference on Computers in Education (ICCE'02) (Vol. 2, pp. 844-845). IEEE Computer Society.
Tait, H., Entwistle, N.J., & McCune, V. (1998). ASSIST: A reconceptualisation of the approaches to studying inventory. In C. Rust (Ed.), Improving student learning (pp. 262-271). Oxford: Oxford Centre for Staff and Learning Development.
Taras, M. (2001). The use of tutor feedback and student self-assessment in summative assessment tasks: Towards transparency for students and for tutors. Assessment and Evaluation in Higher Education, 26(6), 606-614.
Taras, M. (2003). To feedback or not to feedback in student self-assessment. Assessment and Evaluation in Higher Education, 25(5), 549-565.
Tastle, W. J., White, B. A., & Shackleton, P. (2005). E-learning in higher education: The challenge, effort, and return on investment. International Journal on E-Learning, 4(2), 241-250.
Tattersall, C., Manderveld, J., Van den Berg, B., Van Es, R., Janssen, J., & Koper, R. (2005). Swarm-based wayfinding support in open and distance learning. In E. M. Alkhalifa (Ed.), Cognitively informed systems: Utilizing practical approaches to enrich information presentation and transfer (pp. 166-183).
Taylor, A.W., & Hill, F.M. (1993). Issues for implementing TQM in further and higher education: The moderating influence of contextual variables. Quality Assurance in Education, 1(2), 12-21.
Tesone, D. V., & Gibson, J. W. (2001, October). E-mentoring for professional growth. Paper presented at the IEEE International Professional Communication Conference, Santa Fe, NM.
Thorson, D. (2006). Marketing 2.0: The constellation. Retrieved October 25, 2007, from http://donthorson.typepad.com/don_thorson/2006/04/the_constellati.html
Timmerman, B., & Lingard, R. (2003). Assessment of active learning with upper division computer science students. In Proceedings of the 33rd Annual Conference Frontiers in Education (FIE'03) (Vol. 3, pp. S1D/7-S1D/12). Piscataway, NJ: IEEE.
TISIP. (2007). QUIS—Quality, interoperability and standards in e-learning. Trondheim: TISIP Research Foundation.
Titcomb, S. L., Foote, R. M., & Carpenter, H. J. (2004). A model for a successful high school engineering design competition. In Proceedings of the 34th Annual Conference Frontiers in Education (FIE'04) (Vol. 1, pp. 138-141). Piscataway, NJ: IEEE.
Trow, M. (1973). Problems in the transition from elite to mass higher education. Berkeley, CA: Carnegie Commission on Higher Education.
Trowler, P., & Knight, P.T. (2000). Coming to know in higher education: Theorising faculty entry to new work contexts. Higher Education Research & Development, 19(1).
Trowler, P., Saunders, M., & Knight, P.T. (2003). Change thinking, change practices: A guide to change for heads of department, programme leaders and other change agents in higher education. Learning and Teaching Support Network, Generic Centre.
Tu, C. (2000). Critical examination of factors affecting interaction on CMC. Journal of Network and Computer Applications, 23, 39-58.
Tullous, R., & Utecht, R. L. (1994). A decision support system for integration of vendor selection tasks. Journal of Applied Business Research, 10(1), 132-144.
Twigg, C.A. (1994). The changing definition of learning. Educom Review, 29(4).
Ullmo, P.-A., & Ehlers, U.-D. (2007). Quality in e-learning. E-Learning Papers, 2. Retrieved October 30, 2007, from http://www.e-Learningpapers.eu/index.php?page=volume
Ullrich, C. (2005). Course generation based on HTN planning. In Proceedings of the 13th Annual Workshop of the SIG Adaptivity and User Modeling in Interactive Systems (pp. 74-79).
Untersteiner, M. (1954). The sophists. New York: Philosophical Library.
Compilation of References
Uribe, C. L., Schweikhart, S. B., Pathak, D. S., Marsh, G. B., & Fraley, R. R. (2002). Perceived barriers to medical-error reporting: An exploratory investigation. Journal of Healthcare Management, 47(4), 263-280. Uschold, M., King, M., Morales, S., & Zorgios, Y. (1998). The enterprise ontology. Knowledge Engineering Review, 13, 32-89. Valenta, A., Theriault, D., Dieter, M., & Mrtek, R. (2001). Identifying student attitudes and learning styles in distance education. Journal of Asynchronous Learning Networks, 5(2), 111-127. Valiathan, P. (2002). Blended learning models. Learning Circuits. Retrieved October 18, 2007, from http://www.learningcircuits.org/2002/aug2002/valiathan.html Valigiani, G., Jamont, Y., Biojout, R., Lutton, E., & Collet, P. (2005). Experimenting with a real-size man-hill to optimize pedagogical paths. In H. Haddad, L. Liebrock, A. Omicini & R. Wainwright (Eds.), ACM Symposium on Applied Computing (pp. 4-8). Valigiani, G., Lutton, E., Jamont, Y., Biojout, R., & Collet, P. (2006). Automatic rating process to audit a man-hill. WSEAS Transactions on Advances in Engineering Education, 3(1), 1-7. van Bruggen, J., Sloep, P., Van Rosmalen, P., Brouns, F., Vogten, H., & Koper, R. (2004). Latent semantic analysis as a tool for learner positioning in learning networks for lifelong learning. British Journal of Educational Technology, 35(6), 729-738. Van der Linde, G. (2005). The perception of business students at PUCMM of the use of collaborative learning using the BSCW as a tool. In Proceedings of the 6th International Conference on Information Technology Based Higher Education and Training (ITHET 2005) (pp. F2D/10-F2D/15). Piscataway, NJ: IEEE. Van Merriënboer, J. J. G. (1997). Training complex cognitive skills. Englewood Cliffs, NJ: Educational Technology Publications. Velan, G. M., Killen, M. T., Dziegielewski, M., & Kumar, R. K. (2002). Development and evaluation of a computer-assisted learning module on glomerulonephritis for medical students. Medical Teacher, 24(4), 412-416. Verdú, E., Regueras, L. M., Verdú, M. J., Pérez, M. A., & de Castro, J. P. (2006). Improving the higher education through technology-based active methodologies: A case study. WSEAS Transactions on Advances in Engineering Education, 3(7), 649-656. Verdú, E., Regueras, L. M., Verdú, M. J., Pérez, M. A., & de Castro, J. P. (2006). QUEST: A contest-based approach to technology-enhanced active learning in higher education. In S. Impedovo, D. Kalpic & Z. Stjepanovic (Eds.), Proceedings of the 6th WSEAS International Conference on Distance Learning and Web Engineering (DIWEB ’06) (pp. 10-15). Wisconsin: WSEAS. Verdú, E., Verdú, M. J., Regueras, L. M., & de Castro, J. P. (2005). Intercultural and multilingual e-learning to bridge the digital divide. Lecture Notes in Computer Science, 3597, 260-269. Verdú, M. J., de Castro, J. P., Pérez, M. A., Verdú, E., & Regueras, L. M. (2006). Application of TIC-based active methodologies in the framework of the new model of university education: The educational interaction system QUEST. In F. J. García, J. Lozano & F. Lamamie de Clairac (Eds.), CEUR Workshop Proceedings, Virtual Campus 2006 Postproceedings. Selected and Extended Papers (Vol. 186, pp. 33-40). CEUR-WS.org. Retrieved October 26, 2007, from http://CEUR-WS.org/Vol-186/ Verhaart, M., & Kinshuk (2004). Adding semantics and context to media resources to enable efficient construction of learning objects. In Kinshuk, C.-K. Looi, E. Sutinen, D. G. Sampson, I. Aedo, L. Uden, & E. Kähkönen (Eds.), Proceedings of the 4th International Conference on Advanced Learning Technologies (pp. 651-653). Joensuu, Finland: IEEE Computer Society.
Vermetten, Y.J., Vermunt, J.D., & Lodewijks, H.G. (2002). Powerful learning environments? How university students differ in their response to instructional measures. Learning and Instruction, 12, 263-284. Vermunt, J.D. (1998). The regulation of constructive learning processes. British Journal of Educational Psychology, 67, 149-171. Vinicini, P. (2001). The use of participatory design methods in a learner-centered design process. ITFORUM 54. Retrieved October 18, 2007, from http://it.coe.uga.edu/itforum/paper54/paper54.html Vora, P. (1998). Human factors methodology for designing Web sites. In C. Forsythe, E. Grose & J. Ratner (Eds.), Human factors and Web development. Hillsdale, NJ: Lawrence Erlbaum. Vygotsky, L.S., & Cole, M. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press. Wadia-Fascetti, S., & Leventman, P. G. (2000). E-mentoring: A longitudinal approach to mentoring relationships for women pursuing technical careers. Journal of Engineering Education, 89(3), 295-300. Wang, K. H., Wang, T. H., Wang, W. L., & Huang, S. C. (2006). Learning styles and formative assessment strategies: Enhancing student achievement in Web-based learning. Journal of Computer Assisted Learning, 22(3), 207. Warburton, B., & Conole, G. (2003). Key findings from recent literature on computer-aided assessment (pp. 1-19). ALT-C, University of Southampton. Ward, M., & Newlands, D. (1998). Use of the Web in undergraduate teaching. Computers and Education, 31, 171-184. Waxman, H. C., Lin, M., & Michko, G. M. (2003). A meta-analysis of the effectiveness of teaching and learning with technology on student outcomes. Naperville, IL: Learning Point Associates. Webster, J., & Hackley, P. (1997). Teaching effectiveness in technology-mediated distance learning. Academy of Management Journal, 40(6), 1282-1309.
Webster, W.R. (2002, July). Metacognition and the autonomous learner: Student reflections on cognitive profiles and learning environment development. In A. Goody (Ed.), Spheres of influence: Ventures and visions in educational development. Proceedings of ICED 2002. Perth, Australia: University of Western Australia. Webster, W.R. (2003). Cognitive styles, metacognition and the design of e-learning environments. In F. Albalooshi (Ed.), Virtual education: Cases in teaching and learning (pp. 225-240). Hershey, PA: Idea Group Publishing. Webster, W.R. (2004, November 2-3). A learner-centred methodology for learning environment design and development. In Exploring integrated learning environments. Proceedings, Online Learning and Training 2004. Brisbane, Australia: Queensland University of Technology. Webster, W.R. (2005). A reflective and participatory approach to the design of personalised learning environments. Unpublished PhD thesis, Lancaster University, Lancaster, UK. Webster’s Online Dictionary. (n.d.). Retrieved October 30, 2007, from http://www.websters-online-dictionary.org/definition/quality Weigand, H. (1997). Multilingual ontology-based lexicon for news filtering. In IJCAI Workshop on Multilingual Ontologies (pp. 138-159). Weil, S. (1999). Re-creating universities for beyond the stable state: From dearingesque systematic control to post-dearing systemic learning and inquiry. Systems Research and Behavioral Science, 16, 170-190. Welling, L., & Thomson, L. (2003). Using session control in PHP. In PHP and MySQL Web development. Sams Publishing, Developer’s Library. Wells, P., Fieger, P., & de Lange, P. (2005, July). Integrating a virtual learning environment into a second year accounting course: Determinants of overall student perception. Paper presented at the 2005 Accounting and Finance Association of Australia and New Zealand Conference, Melbourne, Australia. Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge: Cambridge University Press. Wenger, E. (1998). Communities of practice. Learning as a social system. Retrieved October 17, 2007, from http://www.co-i-l.com/coil/knowledge-garden/cop/lss.shtml Wentling, T. L., Waight, C., Gallaher, J., La Fleur, J., Wang, C., & Kanfer, A. (2000). E-learning: A review of literature. Knowledge and Learning Systems Group, University of Illinois at Urbana-Champaign. Retrieved October 19, 2007, from http://learning.ncsa.uiuc.edu/papers/elearnlit.pdf Western Interstate Commission for Higher Education. (2002). Best practice for electronically offered degree and certificate programs. Retrieved October 19, 2007, from http://www.wiche.edu/telecom/Article1.htm Wiggins, G. (1989). A true test: Toward a more authentic and equitable assessment. Phi Delta Kappan, 70, 703-713. Wiliam, D., & Black, P. (1996). Meanings and consequences: A basis for distinguishing formative and summative functions of assessment. British Educational Research Journal, 22, 537-548. Williams, P. E. (2003). Roles and competencies for distance education programs in higher education institutions. The American Journal of Distance Education, 17(1), 45-57. Williams, S.C., Davis, M.L., Metcalf, D., & Covington, V.M. (2003). The evolution of a process portfolio as an assessment system in a teacher education program. Current Issues in Education, 6(1). Retrieved October 28, 2007, from http://cie.ed.asu.edu/volume6/number1/ Wilson, B.G. (1995). Situated instructional design: Blurring the distinctions between theory and practice, design
and implementation, curriculum and instruction. In M. Simonson (Ed.), Proceedings of selected research and development presentations. Washington, DC: Association for Educational Communications and Technology. Retrieved October 18, 2007, from http://carbon.cudenver.edu/~bwilson/sitid.html Wilson, B.G. (1996). What is a constructivist learning environment? In B.G. Wilson (Ed.), Constructivist learning environments: Case studies in instructional design (pp. 3-8). Educational Technology Publications. Winch, C. (1996). Quality in education. Oxford: Blackwell. Wirsig, S. (2002). ¿Cuál es el lugar de la tecnología en la educación? [What is the place of technology in education?]. Retrieved October 26, 2007, from http://www.educoas.com/Portal/xbak2/temporario1/latitud/Wirsig_Tic_en_Educacion.doc Wise, J. C., Lall, D., Shull, P. J., Sathianathan, D., & Lee, S. H. (2006). Using Web-enabled technology in a performance-based accreditation environment. In S.L. Howell & M. Hricko (Eds.), Online assessment and measurement: Case studies from higher education, K-12 and corporate (pp. 98-115). London: Information Science Publishing. Woolfolk, A. (2001). Educational psychology. Boston: Allyn and Bacon. World Alliance in Distance Education. (2002). World alliance in distance education. Retrieved October 19, 2007, from http://www.wade-universities.org/index.htm Wright, C. R. (2003). Criteria for evaluating the quality of online courses. Retrieved October 30, 2007, from http://www.imd.macewan.ca/imd/content.php?contentid=36 Wu, D., & Hiltz, S. R. (2004). Predicting learning from asynchronous online discussions. Journal of Asynchronous Learning Networks, 8(2), 139-152. Xenos, M. (2004). Prediction and assessment of student behaviour in open and distance education in computers using Bayesian networks. Computers & Education, 43(4), 345-359. Yin, R. K. (1984). Case study research: Design and methods. Thousand Oaks, CA: Sage.
Yu, F. Y., Chang, L. J., Liu, Y. H., & Chan, T. W. (2002). Learning preferences towards computerised competitive modes. Journal of Computer Assisted Learning, 18(3), 341-350.
Zabalza, M.A. (2001). Evaluación de los aprendizajes en la Universidad [Assessment of learning at the university]. In A. García-Valcárcel (Ed.), Didáctica Universitaria (pp. 261-291). Madrid: La Muralla.
Zakrzewski, S., & Bull, J. (1999). The mass implementation and evaluation of computer-based assessments. Assessment & Evaluation in Higher Education, 23(2), 141-152.
Zakrzewski, S., & Steven, C. (2003). Computer-based assessment: Quality assurance issues, the hub of the wheel. Assessment & Evaluation in Higher Education, 28(6), 609-623.
Zang, W., & Lin, F. (2003). Investigation of Web-based teaching and learning by boosting algorithms. In IEEE International Conference on Information Technology: Research and Education (pp. 445-449).
Zeithaml, V.A., Parasuraman, A., & Berry, L.L. (1990). Delivering quality service: Balancing customer perceptions and expectations. New York: Free Press.
Ziehe, A., & Müller, K.R. (1998). TDSEP: An efficient algorithm for blind separation using time structure. In Proceedings of the 8th International Conference on Artificial Neural Networks (pp. 675-680).
Zywno, M. S., & Waalen, J. K. (2002). The effect of individual learning styles on student outcomes in technology-enabled education. Global Journal of Engineering Education, 6(1), 35-44.
About the Contributors
Francisco José García-Peñalvo (1971) graduated from the University of Valladolid with a degree in Computer Science, later obtained a PhD at the University of Salamanca, and is currently Profesor Titular de Universidad (a senior lecturer) in the Computer Science Department of the University of Salamanca. He leads the GRoup on InterAction & eLearning (GRIAL), a research group whose main lines of work are human-computer interaction, Web engineering, software engineering, educational computing, and communications theory. He has published over 100 papers in international publications and conferences, and has participated in more than 20 research projects. He currently teaches in the Programa de Doctorado del Departamento de Informática (Computer Science Doctorate Program, which has held the Quality Mention of ANECA since the 2003-2004 academic year), as well as in the Máster Oficial en Sistemas Inteligentes (Intelligent Systems Master), the Máster Oficial en TIC Aplicadas a la Educación (ICT Applied to Education Master), and the Programa de Doctorado Procesos de Formación en Espacios Virtuales (Educational Virtual Spaces Doctorate Program) at the University of Salamanca. Dr. García-Peñalvo is currently Director of the Experto/Máster en eLearning: Tecnologías y Métodos de Formación en Red (eLearning Master). As concerns administrative tasks, he is currently Vice-Director of Technology Innovation at his university. *** Giovannina Albano has been assistant professor of Geometry at the University of Salerno (Italy) since 1997, and for a long time she has taught several mathematics courses at the Faculty of Engineering. Her research interests are in mathematics education and in educational models for e-learning environments. She has contributed to the educational theoretical framework implemented in the e-learning platform IWT. 
Her research topics include knowledge domain representation, cooperative learning, and computer-based mathematics learning. She has also investigated the affective impact of the use of e-learning platform. David Camacho has been a lecturer in the Department of Computer Science at Universidad Autonoma de Madrid since 2005. He holds a PhD in Computer Science from Universidad Carlos III de Madrid (Spain), obtained in 2001, and a BS in Physics from Universidad Complutense de Madrid (1994). He has worked in several areas, including multi-agent systems, automated planning, Web intelligence and e-learning. He has also participated in several projects about automatic machine translation, optimising industry processes, multi-agent technologies, intelligent systems and virtual education.
Juan Pablo de Castro was awarded his master’s degree in Telecommunications Engineering (1996) and his PhD (2000). He has been working as a lecturer at the University of Valladolid (Spain) since November 1996. He was the research director of a technological centre from February 2001 to June 2003, overseeing several projects involving a staff of 40. He has participated in projects in the fields of telemedicine, e-learning and the Information Society. In addition, he has published papers in international journals and in relevant conference proceedings related to these fields. He currently acts as an R&D consultant for various enterprises. Miguel Ángel Conde was born in 1980 in Salamanca (Spain), where he has been studying computer science since 2006. From 2002 to 2004, he worked in education, teaching several computer-related courses. In 2004, he began working in software development at GPM. In 2005, he joined the Clay Formación Internacional R&D department, where he has been involved in different e-learning projects and where he still works; he also teaches at the University of Salamanca. Miguel Ángel belongs to the GRIAL (Group for Research on InterAction and eLearning) Research Group, where he specializes in e-learning and Web development. Olga Díez Fernández holds a PhD in Classical Philology from the Universidad de Sevilla (1995). She belongs to the Educational Department of the Government of the Canary Islands (Spain), and has taught at a secondary school specializing in distance learning since 1992. 
She was the coordinator of adult education and distance learning for the Government of the Canary Islands (Spain) during 2005-2006, an online tutor and coordinator of teacher training for the adult and distance learning program of the Educational Department of the Government of the Canary Islands from 2006-2007, and the coordinator of Content Development for Upper Secondary Online Courses during 2005-2006. She is an online teacher for Upper Secondary Online at the CEAD and a member of the Grupo de Investigación en Interacción y e-Learning (GRIAL). Pier Luigi Ferrari is full professor of mathematics education at the University of East Piedmont (Italy). He earned his degree in 1975 at the University of Genova. His earlier research fields were mathematical logic and categorical algebra. As regards his current research interests, he has investigated mathematical language and the interactions between semiotic systems and mathematics learning. He widely uses technology in his mathematics classes and has carried out a number of studies on the use of ICTs at the undergraduate level. Raised in Germany, Evelyn Gullett has 23 years of professional experience embracing a variety of responsibilities, ranging from strategic business to HRM, corporate training and development, and organizational behaviour, in both national and international settings. She has worked on projects in various industries, including international hotel and tourism management, the airline industry, retail, hospital administration, and education, as well as the federal government. Evelyn Gullett holds an MBA, an MA in Organizational Development, and a PhD in Human and Organizational Systems. She has been teaching for 10 years and is currently an assistant professor and faculty manager at U21Global, a consortium of 20 research-intensive universities worldwide.
Sergio Gutiérrez got his MEng in Telecommunication Engineering at University Carlos III of Madrid (Spain) in 2002, and his PhD in Communication Technologies from the same university in 2007, where he now works as a research and teaching assistant. In 2005, he was selected by the Spanish Agency of Research Quality Evaluation (ANECA) as one of the 100 best PhD students in Spain. His research interests include self-organizing systems, the application of swarm intelligence techniques to social systems, and e-learning. Ruth Halperin holds a PhD in Information Systems from the London School of Economics and Political Science, where she is currently a research fellow in the Information Systems and Innovation Group of the Department of Management. Her research interests include the design and implementation of learning technologies and technology-mediated learning practices. Her current research focuses on the social aspects of identity in the Information Society. She is a member of the EU-funded Network of Excellence FIDIS, working in areas such as interoperability and profiling. Prior to joining the LSE in 2002, she was a project manager at a leading software development company specializing in e-learning and KM technologies. Nuria Hernández Nanclares is currently a teacher in the economics faculty, Applied Economics Department, at the University of Oviedo, Spain (Profesora Titular de Economía Aplicada). She finished her graduate degree in Economics with a Special Award in 1992 (Premio Fin de Carrera “Valentín Andrés Álvarez”). She earned her PhD in Economics at the University of Oviedo, receiving the Special Prize (Premio Extraordinario de Doctorado) in 1999. She works in the field of International Economics, in both research and teaching. She is very interested in everything related to university teaching, especially teaching economics. She has taken several courses on effective teaching, alternative assessment techniques, and teaching methodologies. 
In particular, she has visited the University of Maastricht (NL) to follow a course on the “Problem Based Learning” methodology. She has presented papers at different national and international meetings on teaching and learning at university. She has also coordinated several “Innovative Teaching Projects” developed at the University of Oviedo, some of them related to new technologies and their use in teaching, and others associated with new assessment alternatives. She is also a collaborating teacher in the Education Institute (Instituto de Ciencias de la Educación) at her university. Izaskun Ibabe Erostarbe, a doctor of psychology and professor at the University of the Basque Country, has taught the subject “New Technologies and Simulation in Psychology Research” for 8 years. She designed the Web site of the faculty of psychology and of the Department of Social Psychology and Methodologies of Behaviour Sciences. Joana Jauregizar Albonigamayor holds a doctoral degree in psychology and a baccalaureate degree in Psychopedagogy. Jauregizar has been a lecturer at the University of the Basque Country and at present works at the Quality Evaluation and Certification Agency of the Basque University System. Together they have recently published a book: Ibabe, I., & Jauregizar, J. (2005). Cómo crear una Web docente de calidad (How to design a quality teaching Web site). A Coruña: Netbiblo. The book aims to guide university teachers in the use of didactic resources available on the Internet, so as to facilitate the autonomous work of students. Izaskun Ibabe coordinated two innovative projects, in which Joana Jauregizar took
part: “An Interactive Self-Assessment Tool Application” and “Continuous Assessment of the Teaching and Learning Process.” They also have many publications on formative assessment. Célio Gonçalo Marques holds a 5-year degree in Computer Science and Management from the Management School of Santarém (Portugal), a master’s degree in Multimedia Educational Communication from the Open University (Portugal), and a post-graduation course in E-Learning Techniques and Contexts from Coimbra University (Portugal), and is presently pursuing a PhD in Education, specialty of Educational Technology, at the University of Minho (Portugal). He has been involved in projects related to education-oriented computing, including the Portuguese Program “Internet at School,” and has worked as a computer consultant for several enterprises. Presently he is an assistant professor in the Department of Information and Communication Technologies at the Management School of the Polytechnic Institute of Tomar. He has authored various publications, notably the book “Os Hipermédia no Ensino Superior” (“Hypermedia in Higher Education”). His research has been oriented towards the design, development, and evaluation of interactive learning environments, the application of cognitive flexibility theory, and e-learning. Carlos Muñoz Martín was born in Salamanca (Spain). He studied Computer Science at the University of Salamanca (Spain) and is an expert in e-commerce at the University of Salamanca. From June to September 2005, he worked for M2C developing a traceability program for agrarian products. In 2005, he joined the I+D+i (R&D) team of Clay Formación Internacional and began work on projects related to e-learning, e-learning platforms, and Web development. 
João Noivo holds a 5-year degree in Telecommunications and Electronics Engineering from the University of Aveiro, Portugal, a specialization in Industrial Engineering from LNETI, Portugal, and a master’s degree in Operations Research and Statistics from the University of Lisbon, Portugal. He was an assistant professor in the Industrial Electronics Department of the University of Minho, where he participated in management activities and organized its informatics area. He was also responsible for the department that implemented e-learning at the University of Minho. He has attended post-graduate courses in e-learning and public administration, and his current research interests are e-learning and organizational architecture. Alvaro Ortigosa is associate professor of Computer Science at Universidad Autonoma de Madrid and a member of the research group GHIA. He received his BA degree in Computer Science from the Universidad Nacional del Centro de la Provincia de Buenos Aires, Argentina, his MS in Computer Science from the Universidade Federal do Rio Grande do Sul, Brazil, and his PhD in Computer Science from the Universidad Autonoma de Madrid. Dr. Ortigosa has worked on software engineering support environments, software reuse and, more recently, on adaptive systems, collaborative systems, user modelling, mobile environments, and the authoring and evaluation of adaptive systems. Abelardo Pardo is an associate professor of Telematic Engineering at the Carlos III University of Madrid. He received his PhD in Computer Science from the University of Colorado at Boulder. His research interests are in the area of computer-supported learning. He is the principal investigator of the mosaicLearning project on e-learning platforms, tutoring systems, user modeling, and adaptive hypermedia.
María Ángeles Pérez was awarded her master’s degree in Telecommunications Engineering (1996) and her PhD (1999). She has been working as a lecturer at the University of Valladolid since October 1996. She has experience in coordinating projects related to telematic applications for the Information Society, mainly concerning the application of information and communication technologies (ICT) to the learning process. She also has experience in the evaluation of preproposals, proposals, and final reports of projects cofunded by the European Commission. She is author or co-author of various publications and contributions to conferences. Krassie Petrova is a senior lecturer at the School of Computing and Mathematical Sciences at Auckland University of Technology, Auckland, New Zealand. She has over 10 years of international experience of consulting in information systems development and management, and over 15 years of university teaching and research in information systems, programming, data communications, and networks. Krassie has published and presented in New Zealand and internationally. Her research areas are e- and m-learning, IS/IT curriculum development and student skills and capabilities building, and mobile business (including mobile applications, adoption, and business models). Currently, Krassie is the programme leader of the Master of Computer and Information Sciences (MCIS), and the co-editor of the New Zealand Bulletin of Applied Computing and Information Technology (BACIT). See also http://www.aut.ac.nz/schools/computing_and_mathematical_sciences/our_staff/krassie_petrova/ Estrella Pulido received a degree in Computer Science (1989) from the Facultad de Informática of the Universidad Politécnica de Madrid. She worked in industry from 1987 until 1991, in R&D departments. 
She obtained an MSc in Artificial Intelligence with honours from the University of Bristol (1992) and was granted a faculty fellowship from the same university to carry out a PhD (1996). She has worked at the Escuela Politécnica Superior of the Universidad Autónoma de Madrid since October 1996, where she has held an associate professor position since July 2000. She has worked on several projects related to Web-based education and user adaptation. Luisa M. Regueras was awarded her master’s degree in Telecommunications Engineering in 1998 and her PhD (2003). She has worked as a lecturer at the University of Valladolid in Spain since 1999, and is actively involved in developing projects related to the application of ICT to the learning process (e-learning) and telecommunication networks. Her present work involves research into new e-learning technologies. She is author or co-author of various publications and contributions to national and international conferences and congresses. Angélica Rísquez, BA (Psych), MBS, has been a researcher at the Centre for Teaching and Learning at the University of Limerick since 2003. She is committed to the use of innovative teacher and student support mechanisms at third level. From 2004 to 2006, she was responsible for designing, coordinating, and evaluating a program of peer e-mentoring to facilitate first-year students’ adjustment to university. She is currently involved in the implementation of the learning management system at her institution as a technology-enhanced learning facilitator. She also supports the use of plagiarism prevention software, and is involved in initiatives using technology for intercultural education, research skills, and so forth.
María D. R-Moreno earned a PhD (2004) in Computer Science from the Universidad de Alcalá (Spain) with the distinction of the European Doctorate. During her PhD she made several visits to international centers such as British Telecom’s Adastral Park in the UK and the CNR in Rome. In 2006, she was a postdoc at NASA Ames Research Center, working in the Autonomous Systems and Robotics group. During the summer of 2007, she was a visiting researcher at ESA on the ExoMars mission. Her research interests are in the area of planning, scheduling, monitoring, and execution applied to real domains such as satellites, workflows, and e-learning. María José Rodríguez-Conde earned her PhD in Education (University of Salamanca, Spain, 1994). She has been an associate professor of Research Methods in Education at the Faculty of Education of the University of Salamanca since 1999, and is an expert in evaluation methodology and statistical analysis of data in the social sciences. She leads the Research Group in Educational Evaluation and Guidance (GE2O), is a collaborator in the Group of Research in Interaction and E-learning (GRIAL) of the University of Salamanca, and is the director of several projects on evaluation in education. Her most recently published work dealt with evaluation processes in e-learning, and she has directed several doctoral dissertations centered on the evaluation of educational programmes. Addisson Salazar is working towards a doctoral degree in Telecommunications at the Universidad Politécnica de Valencia (UPV). He received a BSc and MSc in Informatics from the Universidad Industrial de Santander and a D.E.A. in Telecommunications from UPV in 2003. He is a researcher in the Signal Processing Group of the Institute of Telecommunication and Multimedia Applications at UPV. 
His research interest is focused on statistical signal processing, pattern recognition, and data mining and knowledge discovery, where he has worked on different theoretical and applied problems, many of them under contract with industry. His theoretical interests include signal classification, time-frequency analysis, independent component analysis, and algorithms for data mining. He has participated in different Programmes of the European Community. He has published more than 70 papers, including journal and conference contributions. Antonio Seoane (1971) holds a 5-year degree in Philosophy from the University of Salamanca (Spain) and has worked as a high-school professor in the Department of Philosophy since 1998; he also teaches in the E-Learning master’s degrees at both the University of Alcalá de Henares and the University of Salamanca (Spain). Currently, he is finishing two doctoral theses, on ancient Greek rhetoric and on online training methodology, respectively. His research interests are ancient rhetoric, modern communication theory, e-learning, and online training methodology. He is an active member of the Research GRoup on InterAction & eLearning (GRIAL) at the University of Salamanca, where he acts as Academic Coordinator of Training Activities, notably for the E-Learning Master and the Online Tutor Lifelong Learning Diploma, both from the University of Salamanca. He is the author of several articles and chapters in his research fields, published in recognized journals and volumes. Rowena Sinclair is a senior lecturer in the Accounting Department of the School of Business at Auckland University of Technology, Auckland, New Zealand. As a chartered accountant, Rowena has over 10 years of extensive accounting and auditing experience in both the private and public sectors, working with small enterprises through to large multinational corporations. 
Her teaching currently focuses on auditing and on contextualizing accounting within the workplace. Her research currently focuses on the transparency of the financial reports of charities, the topic of the PhD she is completing. Previously, Rowena taught in the banking field, reflecting her years of service in the banking industry, for which she was recognized with the award of Senior Associate of the Financial Services Institute of Australasia. Further recognition of this expertise came when she was invited to appear on television to be interviewed on various banking issues.

Sergio Vásquez Bronfman is professor of Information Systems at ESCP-EAP (European School of Management) in Paris. Since 1983, he has been involved in research and practice of ICT-based learning. In the 1990s, he worked mainly in university settings in France, developing Internet-based educational innovations, while since 2000 he has been doing e-learning projects in corporate settings in Spain. His work focuses on (a) the design of learning systems aimed at bridging the gap between "knowing" and "doing," hence increasing the return on investment of professional and corporate education, and (b) the political questions and power games that exist in e-learning and knowledge management projects and that are key to ensuring implementation success.

Alberto Velasco Florines was born in Salamanca (Spain) in 1981. He studied computer science engineering at the University of Salamanca (Spain) from 1999 to 2007. He worked as a system administrator and Web developer in 2006, and has been working as a developer in the R&D department of Clay Formación Internacional since November 2006. He has been developing a mobility system for the company's own e-learning platform and has collaborated in the development of an application for statistics control in Moodle.
Elena Verdú, a telecommunications engineer, has been a project manager at CEDETEL (Centre for the Development of Telecommunications of Castilla y León) since December 2000, coordinating regional, national, and international research projects in the fields of new telematic applications for the Information Society, communication networks, and software engineering. She has published papers in international journals and participated in international conferences and congresses. She is also an associate lecturer at the Escuela Técnica Superior de Ingenieros de Telecomunicación (School of Telecommunications Engineering) of the University of Valladolid, Spain.

María Jesús Verdú received an MS and a PhD in Telecommunications Engineering from the University of Valladolid, Spain, in 1996 and 1999, respectively. She has been working as a lecturer at the University of Valladolid since November 1996. She has experience coordinating projects in the fields of new telematic applications for the Information Society and telecommunications networks, especially related to e-learning. She has published papers in international journals and in relevant conference proceedings in these fields.

Luis Vergara was born in Madrid (Spain) in 1956. He received the Ingeniero de Telecomunicación and Doctor Ingeniero de Telecomunicación degrees from the Universidad Politécnica de Madrid (UPM) in 1980 and 1983, respectively. Until 1992 he worked at the Departamento de Señales, Sistemas y Radiocomunicaciones (UPM) as an associate professor. In 1992, he joined the Departamento de Comunicaciones of the Universidad Politécnica de Valencia (UPV), where he served as Department Head until April 2004. From April 2004 to April 2005 he was vice-director of New Technologies at the UPV. Currently he heads the Signal Processing Group of the Institute of Telecommunication and Multimedia
Applications at UPV. His research concentrates on statistical signal processing, where he has worked on various theoretical and applied problems, many of them under contract with industry. He has participated in several international actions, particularly NATO projects and various Programmes of the European Community. He has published more than 150 papers, including journal articles and conference contributions.

Ray Webster is currently associate professor (adjunct) in the School of Information Technology, Murdoch University, Australia. He has a PhD in Educational Research from Lancaster University, an MSc in Cognitive Science (Brunel), and a BA (Hons) in Economics and Geography (Middlesex), as well as graduate qualifications in information systems and education. In a long career combining research, teaching, and consultancy, Ray has worked in countries including Australia, the UK, Turkey, and Malaysia. His current research interests include the use of cognitive and learning profiles in virtual environment design to enhance student autonomy and self-actualization in the learning process.
Index
A
a-didactical situations 135
active learning, ICT-based 235
active learning approach 232–249
activity nodes 202
analytic hierarchy process (AHP) 117
anonymity mode 238
ant colony optimization (ACO) 200
artificial intelligence techniques 149–172
asynchronous JavaScript and XML (AJAX), new look 216

B

blended learning 89, 134
blended learning course on communication 40

C

clustering analysis 186
CoFIND 207
cognitive profiles 3
collaborative filtering 206
collaborative sequencing 200
competitive learning 236
computer-assisted assessment (CAA) 280, 311
computer-based assessment system (CBA) 310
computer-provided feedback 282
computer aided instruction (CAI) 86
computer attitude scale (CAS) 99
computer supported collaborative learning (CSCL) 234
constraint programming (CP) 155
constructivism, as a goal 50
contingent plan 154
cooperative learning 136
course-level report generation 259

D

data mining scheme 180
data preprocessing 177
diplomas and certificate generation 259
distance mode 238
domain theory 153
E

e-assessment, information gathering techniques 305
e-facilitators, quality assessment 317–327
e-facilitators quality assessment, issues 320
e-learning, accessibility 119
e-learning, components 120
e-learning, designing an online assessment 300–316
e-learning, failure w/o a method 47
e-learning, formative online assessment 279–299
e-learning, Heideggerian view 30–45
e-learning, online interaction 123
e-learning, quality evaluation 331
e-learning, quality in 330
e-learning, student satisfaction 119
e-learning, training teachers 83–95
E-Learning 2.0 213–231
E-Learning 2.0, best practices 223
E-Learning 2.0, reaching classrooms 219
E-Learning 2.0, tools and challenges 220
e-learning practices, contextual factors 102
e-learning practices, institutional factors 96–111
e-learning research, institutional context 101
e-learning tools, automated processes 152
e-learning value, stakeholder perspectives 114
e-learning value, student experiences 112–131
e-mentoring 61–82
e-mentoring, a European perspective 63
e-mentoring, best practice 69
e-mentoring, differences from face-to-face 67–76
e-mentoring, issues 64
e-portfolio, strategic instrument of assessment 271
e-portfolio, strategic use 264–278
e-portfolio of "international economic relations" 268
e-Qual 328–348
e-quality assessment matrix (e-QAM) 321
e-quality learning standards 334
e-Qual model 336
education of man as a citizen 55
effective learning 264–278
F

face-to-face mode 238
flexible lifelong learning 85
flexible student alignment (FSA) 10
formative assessment concept 280
formative online assessment, design of activities 285
formative online assessment, strategies 288
formative vs. summative assessment 280
G

General Agreement on Trade in Services (GATS) 116
graphical user interface (GUI) 259
Graphplan planner 154
Greek Paideia 53–56
Growltiger 40
H

Heideggerian view on e-learning 30–45
Heidegger philosophy 31
heuristic search planner (HSP) 154
Homeric hero 54
HTN planner 154
I

ICT-based active learning 235
ICT limits, different learning 38
ICT use 240
independent component analysis (ICA) 174, 180
individual learning 134
InnoeLearning 336
institutional conventions 103
institutional factors, formation of e-learning practices 96–111
institutional standards 103
instructional style 98
IPSS final solution 165
IPSS integration 159
K

knowledge discovery, from e-learning activities 173–198
knowledge discovery in databases (KDD) 175
knowledge presenting 104
L

Learning at University unit 20
learning complexity 7
learning design 90
learning design (LD) 150
learning design (LD) information model 151
learning domains, representation formalisms 151
learning environment, definition 3
learning management system (LMS) x–xii, 250–263
learning management systems (LMS) 86, 96, 305
learning networks 202
learning opportunities 138
learning outcomes 90
learning strategy, choosing 239
learning styles 240
learning styles, classifications 234
learning technology (LT) 98
learning technology use, influencing factors 97
linear vector quantization (LVQ) 185
M

Markov decision process (MDP) 154
mathematical language, pragmatic view 138
mathematics education, integrating technology 132–148
mathematics education, research 133
mentoring, multifaceted/elusive concept 62
MentorNet 65
methodology assessment, aims 302
methodology assessment, criteria and indicators 303
methodology assessment, in e-learning 302
mining decision rules 188
Moodle, development model 254
Moodle, error in the grade system 256
Moodle, learning management system platform 252
motivation, learning strategy 239
N

neural networks 185
O

Odyssey 54
online adaptive assessment 288
online assessment 283
online assessment, effectiveness 284
online assessment, pros and cons 283
online assessment tool, creating 285
online collaborative assessment 289
online training methodology, building 46–60
open and distance learning (Meca-ODL), guide 335
open source LMS customization 250–263
P

paraschool system 201
participatory design 12
participatory methodology 19
personalized e-learning environment (PELE) 4
personalized e-learning environment (PELE), the need for 6
personalized e-learning environments (PELE) 1–29
personalized e-learning environments (PELE), a systems perspective 8
personal teaching 134
phenomenology of learning a skill 33
pheromone 200
planning techniques 153
platform-level report generation 257
portfolio assessment 290
principal component analysis (PCA) 182
professional, continuous, and corporate education (PCCE) 30
Q

QUEST, a mixed competitive-collaborative solution 242
Quest Environment for Self-managed Training (QUEST) 234
R

reflective and participatory approach to design (RAPAD) 1–29
reflective and participatory approach to design (RAPAD), a systems perspective 8
reflective and participatory approach to design (RAPAD), definition 2
reflective and participatory approach to design (RAPAD), development 15
reflective and participatory approach to design (RAPAD), student engagement 14
regression analysis 185
representational state transfer (REST) 216
S

SAT-based planner 154
scheduling techniques 154
Scottish centre for research into online learning and assessment (SCROLLA) 310
self-assessment and learning, case study 291
semiosis 141
semiotic representation systems 137
shareable courseware object reference model (SCORM) 152
SIT 204
situation learning (SL) 152
social learning 51
Socrates 55
Sophists 55
student-focused model, the excuse 51
student as learning system (SLS) 8, 10
student learning process, monitoring 149–172
sustainable environment for the evaluation of quality in e-learning (SEEQUEL) 329, 334
swarm-based techniques, in e-learning 199–212
T

TANGOW 156
TANGOW, authoring a course 157
TANGOW courses, case study 159
TANGOW courses, monitoring 163
TANGOW logs 158
teacher training programs, developing 87
teaching opportunities 138
telematic environments 232–249
total order (TO) planner 154
total quality control (TQC) 318
total quality management (TQM) 318
total quality management (TQM) in higher education 319
training teachers, a case study 88
training teachers in evolving educational contexts 87
tri-dimensional relationship 135
V

virtual learning environments (VLE) 150
W

Web 2.0 214
Web 2.0, key technologies 215
Web mining 175
Web revolution 214