Student Satisfaction and Learning Outcomes in E-Learning: An Introduction to Empirical Research

Sean B. Eom, Southeast Missouri State University, USA
J.B. Arbaugh, University of Wisconsin Oshkosh, USA
Senior Editorial Director: Kristin Klinger
Director of Book Publications: Julia Mosemann
Editorial Director: Lindsay Johnston
Acquisitions Editor: Erika Carter
Development Editor: Hannah Ablebeck
Production Editor: Sean Woznicki
Typesetters: Natalie Pronio, Jennifer Romanchak, Milan Vracarich Jr. and Keith Glazewski
Print Coordinator: Jamie Snavely
Cover Design: Nick Newcomer
Published in the United States of America by Information Science Reference (an imprint of IGI Global)
701 E. Chocolate Avenue
Hershey PA 17033
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.igi-global.com/reference

Copyright © 2011 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher. Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data

Student satisfaction and learning outcomes in e-learning: an introduction to empirical research / S. Eom and J.B. Arbaugh, editors.
p. cm.
Includes bibliographical references and index.
Summary: “This book familiarizes prospective researchers with processes and topics for conducting research in e-learning, addressing Theoretical Frameworks, Empirical Research Methods and Tutorial, Factors Influencing Student Satisfaction and Learning Outcomes, and Other Applications of Theory and Method”--Provided by publisher.
ISBN 978-1-60960-615-2 (hardcover) -- ISBN 978-1-60960-616-9 (ebook)
1. Computer-assisted instruction. 2. Distance education. 3. Learning, Psychology of. I. Eom, Sean B. II. Arbaugh, J. B., 1962-
LB1044.87.S848 2011
371.33’44678--dc22
2010040622
British Cataloguing in Publication Data A Cataloguing in Publication record for this book is available from the British Library. All work contributed to this book is new, previously-unpublished material. The views expressed in this book are those of the authors, but not necessarily of the publisher.
Table of Contents
Preface . ............................................................................................................................................... xiv Section 1 Theoretical Frameworks Chapter 1 Multi-Disciplinary Studies in Online Business Education: Observations, Future Directions, and Extensions........................................................................................................... 1 J. B. Arbaugh, University of Wisconsin Oshkosh, USA Chapter 2 Learning and Satisfaction in Online Communities of Inquiry............................................................... 23 Zehra Akyol, Canada D. Randy Garrison, University of Calgary, Canada
Section 2 Empirical Research Methods and Tutorial Chapter 3 A Review of Research Methods in Online and Blended Business Education: 2000-2009.................... 37 J. B. Arbaugh, University of Wisconsin Oshkosh, USA Alvin Hwang, Pace University, USA Birgit Leisen Pollack, University of Wisconsin Oshkosh, USA Chapter 4 An Introduction to Path Analysis Modeling Using LISREL ................................................................ 57 Sean B. Eom, Southeast Missouri State University, USA Chapter 5 Testing the DeLone-McLean Model of Information System Success in an E-Learning Context......... 82 Sean B. Eom, Southeast Missouri State University, USA James Stapleton, Southeast Missouri State University, USA
Chapter 6 An Introduction to Structural Equation Modeling (SEM) and the Partial Least Squares (PLS) Methodology........................................................................................... 110 Nicholas J. Ashill, American University of Sharjah, United Arab Emirates Chapter 7 Using Experimental Research to Investigate Students’ Satisfaction with Online Learning.............. 130 Art W. Bangert, Montana State University, USA Chapter 8 Student Performance in E-Learning Environments: An Empirical Analysis Through Data-Mining.......................................................................................................................... 149 Constanta-Nicoleta Bodea, Academy of Economic Studies, Romania Vasile Bodea, Academy of Economic Studies, Romania Ion Gh. Roşca, Academy of Economic Studies, Romania Radu Mogos, Academy of Economic Studies, Romania Maria-Iuliana Dascalu, Academy of Economic Studies, Romania Chapter 9 How to Design, Develop, and Deliver Successful E-Learning Initiatives........................................... 195 Clyde Holsapple, University of Kentucky, USA Anita Lee-Post, University of Kentucky, USA Section 3 Factors Influencing Student Satisfaction and Learning Outcomes Chapter 10 Quality Assurance in E-Learning......................................................................................................... 231 Stacey McCroskey, Online Adjunct Professor, USA Jamison V. Kovach, University of Houston, USA Xin (David) Ding, University of Houston, USA Susan Miertschin, University of Houston, USA Sharon Lund O’Neil, University of Houston, USA Chapter 11 Measuring Success in a Synchronous Virtual Classroom.................................................................... 249 Florence Martin, University of North Carolina Wilmington, USA Michele A. Parker, University of North Carolina Wilmington, USA Abdou Ndoye, University of North Carolina Wilmington, USA Chapter 12 Factors Influencing User Satisfaction with Internet-Based E-Learning in Corporate South Africa............267 Craig Cadenhead, University of Cape Town, South Africa Jean-Paul Van Belle, University of Cape Town, South Africa
Chapter 13 Student Personality and Learning Outcomes in E-Learning: An Introduction to Empirical Research.............................................................................................................................. 294 Eyong B. Kim, University of Hartford, USA Chapter 14 A Method for Adapting Learning Objects to Students’ Preferences.................................................... 316 Ana Sanz Esteban, University Carlos III of Madrid, Spain Javier Saldaña Ramos, University Carlos III of Madrid, Spain Antonio de Amescua Seco, University Carlos III of Madrid, Spain
Section 4 Other Applications of Theory and Method Chapter 15 Understanding Graduate Students’ Intended Use of Distance Education Platforms........................... 340 María del Carmen Jiménez-Munguía, Universidad de las Américas Puebla, México Luis Felipe Luna-Reyes, Universidad de las Américas Puebla, México Chapter 16 Online Project-Based Learning: Students’ Views, Concerns and Suggestions.................................... 357 Erman Yukselturk, Middle East Technical University, Turkey Meltem Huri Baturay, Kırıkkale University, Turkey Chapter 17 Students’ Perception, Interaction, and Satisfaction in the Interactive Blended Courses: A Case Study........................................................................................................................................ 375 Bünyamin Atici, Firat University, Turkey Yalın Kılıç Türel, Firat University, Turkey Compilation of References ............................................................................................................... 392 About the Contributors .................................................................................................................... 430 Index.................................................................................................................................................... 438
Detailed Table of Contents
Preface ................................................................................................................................................ xiv Section 1 Theoretical Frameworks The first section, Theoretical Frameworks, introduces readers to emerging methodological and theoretical perspectives for effective empirical e-learning research. The two chapters in the book’s first section present a case for increased use of multi-course, multi-disciplinary studies and provide an overview and application of an increasingly influential model of e-learning effectiveness, the Community of Inquiry framework. Chapter 1 Multi-Disciplinary Studies in Online Business Education: Observations, Future Directions, and Extensions........................................................................................................... 1 J. B. Arbaugh, University of Wisconsin Oshkosh, USA This chapter argues that research in online teaching and learning in higher education should take a multi-disciplinary orientation, especially in settings whose curricula are drawn from several disciplinary perspectives such as business schools. The benefits of a multi-disciplinary approach include curriculum integration and enhanced communication and collective methodological advancement among online teaching and learning scholars from the disciplines that comprise the integrated curricula. After reviewing multi-disciplinary studies in business education published to date, the chapter concludes with recommendations for advancing research in this emerging stream. Some of the primary recommendations include the use of academic discipline as a moderating variable, more studies incorporating samples comprised of faculty and/or undergraduate students, and the development of more comprehensive measures of student learning. Chapter 2 Learning and Satisfaction in Online Communities of Inquiry............................................................... 23 Zehra Akyol, Canada D. Randy Garrison, University of Calgary, Canada
The purpose of this chapter is to explain the capability of the Community of Inquiry (CoI) framework as a research model to study student learning and satisfaction. The framework identifies three elements (social, cognitive and teaching presence) that contribute directly to the success of an e-learning experience through the development of an effective CoI. It is argued that a CoI leads to higher learning and increased satisfaction. The chapter presents findings from two online courses designed using the CoI approach. Overall, the students in these courses had high levels of perceived learning and satisfaction as well as actual learning outcomes.
Section 2 Empirical Research Methods and Tutorial The second section of the book is titled Empirical Research Methods and Tutorial. Because empirical research in e-learning is our topic of interest, it seems particularly appropriate that research methods are the focus of the book’s second section. After a review of research methods employed to date in a relatively active discipline, the book’s second section chronicles and provides examples of several of the structural equation modeling techniques whose increased use was called for in the review chapter. This section also includes chapters that deal with higher order multivariate techniques, experimental designs, data mining, and action research in furthering our understanding of e-learning success.
Chapter 3 A Review of Research Methods in Online and Blended Business Education: 2000-2009.................... 37 J. B. Arbaugh, University of Wisconsin Oshkosh, USA Alvin Hwang, Pace University, USA Birgit Leisen Pollack, University of Wisconsin Oshkosh, USA This review of the online teaching and learning literature in business education found growing sophistication in analytical approaches over the last 10 years. We believe researchers are uncovering important findings from the large number of predictors, control variables, and criterion variables examined. Scholars are employing appropriate and increasingly sophisticated techniques such as structural equation models in recent studies (16 studies) within a field setting. To increase methodological rigor, researchers need to consciously incorporate control variables that are known to influence criterion variables of interest so as to clearly partial out the influence of their predictor variables of interest. This will help address shortcomings arising from the inability to convince sample respondents such as instructors, institutional administrators, and graduate business students on the benefits versus the cost of a fully randomized design approach. Chapter 4 An Introduction to Path Analysis Modeling Using LISREL ................................................................ 57 Sean B. Eom, Southeast Missouri State University, USA
Over the past decades, we have seen a wide range of empirical research in the e-learning literature. The use of multivariate statistical tools has been a staple of the research stream throughout the decade. Path analysis modeling is one of four related multivariate statistical models: regression, path analysis, confirmatory factor analysis, and structural equation models. This chapter focuses on path analysis modeling for beginners using LISREL 8.70. Several topics covered in this chapter include foundational concepts, assumptions, and steps of path analysis modeling. The major steps in path analysis modeling explained in this chapter consist of specification, identification, estimation, testing, and modification of models. Chapter 5 Testing the DeLone-McLean Model of Information System Success in an E-Learning Context......... 82 Sean B. Eom, Southeast Missouri State University, USA James Stapleton, Southeast Missouri State University, USA This chapter has two important objectives: (a) introduction of structural equation modeling for a beginner; and (b) empirical testing of the validity of the information system (IS) success model of DeLone and McLean (the DM model) in an e-learning environment, using LISREL-based structural equation modeling. The following section briefly describes the prior literature on course delivery technologies and e-learning success. The next section presents the research model tested and a discussion of the survey instrument. The structural equation modeling process is fully discussed including specification, identification, estimation, testing, and modification of the model. The final section summarizes the test results. To build e-learning theories, untested conceptual frameworks must be tested and refined. Nevertheless, there has been very little testing of these frameworks. This chapter is concerned with the testing of one such framework. There is abundant prior research that examines the relationships among information quality, system quality, system use, user satisfaction, and system outcomes. This is the first study that focuses on the testing of the DM model in an e-learning context. Chapter 6 An Introduction to Structural Equation Modeling (SEM) and the Partial Least Squares (PLS) Methodology........................................................................................... 110 Nicholas J. Ashill, American University of Sharjah, United Arab Emirates Over the past 15 years, the use of Partial Least Squares (PLS) in academic research has enjoyed increasing popularity in many social sciences including Information Systems, marketing, and organizational behavior. PLS can be considered an alternative to covariance-based SEM and has greater flexibility in handling various modeling problems in situations where it is difficult to meet the hard assumptions of more traditional multivariate statistics. This chapter focuses on PLS for beginners. Several topics are covered and include foundational concepts in SEM, the statistical assumptions of PLS, a LISREL-PLS comparison, and reflective and formative measurement. Chapter 7 Using Experimental Research to Investigate Students’ Satisfaction with Online Learning.............. 130 Art W. Bangert, Montana State University, USA
The use of experimental research for investigating the effectiveness of technology-supported instructional innovations in K-12 and higher education settings is fairly limited. The implementation of the No Child Left Behind Act (NCLB) of 2001 has renewed the emphasis on the use of experimental research for establishing evidence to support the effectiveness of instructional interventions and other school-based programs in K-12 and higher education contexts. This chapter discusses the most common experimental designs and threats to internal validity of experimental procedures that must be controlled to ensure that the interventions or programs under investigation are responsible for changes in the dependent variables of interest. A study by Bangert (2008) is used to illustrate procedures for conducting experimental research, controlling potential threats to internal validity, and reporting results that communicate both practical and statistical significance. Chapter 8 Student Performance in E-Learning Environments: An Empirical Analysis Through Data-Mining.......................................................................................................................... 149 Constanta-Nicoleta Bodea, Academy of Economic Studies, Romania Vasile Bodea, Academy of Economic Studies, Romania Ion Gh. Roşca, Academy of Economic Studies, Romania Radu Mogos, Academy of Economic Studies, Romania Maria-Iuliana Dascalu, Academy of Economic Studies, Romania The aim of this chapter is to explore the application of data mining for analyzing performance and satisfaction of the students enrolled in an online two-year master’s degree programme in project management. This programme is delivered by the Academy of Economic Studies, the largest Romanian university in economics and business administration, in parallel as an online programme and as a traditional one. The main data sources for the mining process are the survey made for gathering students’ opinions, the operational database with the students’ records, and data regarding students’ activities recorded by the e-learning platform. More than 180 students responded, and more than 150 distinct characteristics/variables per student were identified. Due to the large number of variables, data mining is a recommended approach to analyzing these data. Clustering, classification, and association rules were employed in order to identify the factors explaining students’ performance and satisfaction, and the relationship between them. The results are very encouraging and suggest several future developments. Chapter 9 How to Design, Develop, and Deliver Successful E-Learning Initiatives........................................... 195 Clyde Holsapple, University of Kentucky, USA Anita Lee-Post, University of Kentucky, USA The purposes of this chapter are three-fold: (1) to present findings in investigating the success factors for designing, developing and delivering e-learning initiatives, (2) to examine the applicability of Information Systems theories to study e-learning success, and (3) to demonstrate the usefulness of action research in furthering understanding of e-learning success. Inspired by issues and challenges experienced in developing an online course, a process approach for measuring and assessing e-learning success is advanced. This approach adopts an Information Systems perspective on e-learning success to address the question of how to guide the design, development, and delivery of successful e-learning initiatives.
The validity and applicability of the process approach to measuring and assessing e-learning success are demonstrated in empirical studies involving cycles of action research. Merits of this approach are discussed, and its contributions in paving the way for further research opportunities are presented. Section 3 Factors Influencing Student Satisfaction and Learning Outcomes The third section of the book examines particular influences on e-learning course outcomes in a variety of settings. These chapters examine factors such as learner dispositional and behavioral characteristics, quality assurance frameworks for e-learning effectiveness, course content design and development, and their roles in shaping effective e-learning environments. To date, much of e-learning research has focused on asynchronous learning environments (exemplified by the Journal of Asynchronous Learning Networks) based in higher education settings. However, these are not the only contexts in which e-learning occurs. Therefore, we also address the alternative and potentially increasingly important settings of synchronous course delivery and corporate learning environments in this section. Chapter 10 Quality Assurance in E-Learning......................................................................................................... 231 Stacey McCroskey, Online Adjunct Professor, USA Jamison V. Kovach, University of Houston, USA Xin (David) Ding, University of Houston, USA Susan Miertschin, University of Houston, USA Sharon Lund O’Neil, University of Houston, USA Chapter 11 Measuring Success in a Synchronous Virtual Classroom.................................................................... 249 Florence Martin, University of North Carolina Wilmington, USA Michele A. Parker, University of North Carolina Wilmington, USA Abdou Ndoye, University of North Carolina Wilmington, USA Chapter 12 Factors Influencing User Satisfaction with Internet-Based E-Learning in Corporate South Africa............267 Craig Cadenhead, University of Cape Town, South Africa Jean-Paul Van Belle, University of Cape Town, South Africa
a systematic way of identifying and addressing the external and internal factors that might impact the success of their instruction. The strategies for empirically researching the SVC, which range from qualitative inquiry to experimental design, are discussed along with practical examples. This information will benefit instructors, researchers, non-profit and profit organizations, and academia. Chapter 12 Factors Influencing User Satisfaction with Internet-Based E-Learning in Corporate South Africa............267 Craig Cadenhead, University of Cape Town, South Africa Jean-Paul Van Belle, University of Cape Town, South Africa This article looks at the factors that influence user satisfaction with Internet-based learning in the South African corporate environment. An electronic survey was administered, and one hundred and twenty responses from corporations across South Africa were received. Only five of the thirteen factors were found to exert a statistically significant influence on learner satisfaction: instructor response towards the learners, instructor attitude toward Internet-based learning, the flexibility of the course, perceived usefulness, perceived ease of use, and the social interaction experienced by the learner in assessments. Interestingly, four of those five were also identified as significant in a similar Taiwanese study, which provides an interesting cross-cultural validation for the findings, even though our sample was different and smaller. Perhaps surprisingly, none of the six demographic variables exerted significant influence. Hopefully organisations and educational institutions can note and make use of the important factors in conceptualizing and designing their e-learning courses.
Chapter 13 Student Personality and Learning Outcomes in E-Learning: An Introduction to Empirical Research.............................................................................................................................. 294 Eyong B. Kim, University of Hartford, USA Web-based courses are a popular format in the e-learning environment. Among students enrolled in Web-based courses, some students learn a lot while others do not. There are many possible reasons for the differences in learning outcomes (e.g., student’s learning style, satisfaction, motivation, etc.). In the last few decades, student personality has emerged as an important factor influencing the learning outcomes in a traditional classroom environment. Among different personality models, the Big-Five model of personality has been successfully applied to help understand the relationship between personality and learning outcomes. Because Web-based courses are becoming popular, the Big-Five model is applied to find out if students’ personality traits play an important role in Web-based course learning outcomes. Chapter 14 A Method for Adapting Learning Objects to Students’ Preferences.................................................... 316 Ana Sanz Esteban, University Carlos III of Madrid, Spain Javier Saldaña Ramos, University Carlos III of Madrid, Spain Antonio de Amescua Seco, University Carlos III of Madrid, Spain
The development of information and communications technologies (ICT) in recent years has led to new forms of education, and consequently, e-learning systems. Several learning theories and styles define learning in different ways. This chapter analyzes these different learning theories and styles, as well as the main standards for creating contents with the goal of developing a proposal for structuring courses and organizing material which best fits students’ needs, in order to increase motivation and improve the learning process.
Section 4 Other Applications of Theory and Method The fourth section of the book includes three chapters that deal with other applications of e-learning theory and method. The book’s final section extends the approach of alternative e-learning theory and environments through applying the Unified Theory of Acceptance and Use of Technology (UTAUT), project-based learning, and blended learning. Chapter 15 Understanding Graduate Students’ Intended Use of Distance Education Platforms........................... 340 María del Carmen Jiménez-Munguía, Universidad de las Américas Puebla, México Luis Felipe Luna-Reyes, Universidad de las Américas Puebla, México The objective of this chapter is to use the Unified Theory of Acceptance and Use of Technology to better understand graduate students’ intended use of distance education platforms, using as a case a distance education platform of a Mexican university, the SERUDLAP system. Four constructs are hypothesized to play a significant role: performance expectancy, effort expectancy, social influence, and attitude toward using technology; the moderating factors were gender and voluntariness of use. Data for the study was gathered through an online survey with a response rate of about 41%. Results suggested that performance expectancy and attitude towards technology are factors that help us understand graduate students’ intended use of a distance education platform. Future research must consider the impact of some factors, such as previous experiences, age, and facilitating conditions, in order to better understand the students’ behavior. Chapter 16 Online Project-Based Learning: Students’ Views, Concerns and Suggestions.................................... 357 Erman Yukselturk, Middle East Technical University, Turkey Meltem Huri Baturay, Kırıkkale University, Turkey This study integrated project-based learning (PBL) in an online environment and aimed to investigate critical issues, dynamics, and challenges related to PBL from the perspectives of 49 students in an online course. The effect of PBL was examined qualitatively with an open-ended questionnaire, observations, and the submissions of students taking an online certificate course. According to the findings, students thought that an online PBL course supports their professional development with provision of practical knowledge, enhanced project development skills, self-confidence, and research capability. This support is further augmented with the facilities of the online learning environment. Students
mainly preferred team-work over individual work. Although students were mostly satisfied with the course, they still had some suggestions for prospective students and instructors. The findings are particularly important for those people who are planning to organize courses or activities which involve online PBL and who are about to take an online or face-to-face PBL course. Chapter 17 Students’ Perception, Interaction, and Satisfaction in the Interactive Blended Courses: A Case Study........................................................................................................................................ 375 Bünyamin Atici, Firat University, Turkey Yalın Kılıç Türel, Firat University, Turkey Blended courses, which offer students and teachers several possibilities such as becoming more interactive and more active, have become increasingly widespread in both K-12 and higher education settings. With the rise of cutting-edge technologies, institutions and instructors have embarked on creating new learning environments with a variety of new delivery methods. At the same time, designing visually impressive and attractive blended settings for students has become easier with extensive learning and content management systems (LMS, CMS, LCMS) such as Blackboard, WebCT, Moodle, and virtual classroom environments (VLE) such as Adobe Connect, Dimdim, and WiZiQ. In this study, we aimed to investigate students’ perspectives on and satisfaction with the designed interactive blended learning settings and to find out the students’ views on both synchronous and asynchronous interactive blended learning environments (IBLE). Compilation of References ............................................................................................................... 392 About the Contributors .................................................................................................................... 430 Index.................................................................................................................................................... 438
Preface
From its humble origins approximately 30 years ago (Hiltz & Turoff, 1978), e-learning may now be entering a golden age. In the United States, 4.6 million students took at least one online course during Fall 2008, a seventeen percent increase from the previous year. U.S. schools offering these courses have seen increases in demand for e-learning options, with 66 percent and 73 percent of responding schools reporting increased demand for new and existing online course offerings respectively (Allen & Seaman, 2010). Similar reactions to e-learning are occurring across the globe. The European Union’s Lifelong Learning Programme will be investing much of its 7 billion Euro budget between 2007 and 2013 in the development and enhancement of e-learning tools and open collaboration initiatives (European Commission, 2010). Institutions such as Ramkhamhaeng University in Thailand, the Indira Gandhi National Open University in India, and the Open University of Malaysia are adopting e-learning to help manage enrollments approaching 2 million students (Bonk, 2009). With e-learning beginning to expand beyond its historic roots in higher education to K-12 educational settings, and with populous nations such as China, India, and Indonesia only beginning to embrace e-learning, it appears that the demand for online learning will only increase in the future, and likely increase dramatically. But in spite of such potential promise for e-learning, support for delivering education through this medium is far from unanimous. Empirical studies suggest that online education is not a universal innovation applicable to all types of instructional situations. Online education can be a superior mode of instruction if it is targeted to learners with specific learning styles (visual and read/write) (Eom, Ashill, & Wen, 2006) and personality characteristics (Schniederjans & Kim, 2005), and if it is supported with timely, helpful instructor feedback of various types. Although cognitive and diagnostic feedback are important factors that improve perceived learning outcomes, metacognitive feedback can induce students to become self-regulated learners. Recent meta-analytic studies (Means, Toyama, Murphy, Bakia, & Jones, 2009; Sitzmann, Kraiger, Stewart, & Wisher, 2006) also suggest that e-learning outcomes now equal, and in some cases surpass, those provided in classroom-based settings. However, concerns regarding this delivery medium’s effectiveness continue to persist (Morgan & Adams, 2009; Sarker & Nicholson, 2005). Some question its appropriateness for the delivery of technically-oriented or skills-based content (Hayes, 2007; Kellogg & Smith, 2009; Marriott, Marriott, & Selwyn, 2004). Others bemoan a lack of direct contact between students and instructors (Haytko, 2001; Tanner, Noser, & Totaro, 2009; Wilkes, Simon, & Brooks, 2006). Still others associate the medium with for-profit universities, and therefore lump its use in with the practices of low standards and high-pressure marketing associated with some of those types of institutions (Bonk, 2009; Stahl, 2004). Still others believe that although the technology itself may be neither good nor bad, the bad or even non-existent training received by many of those employed to teach using the medium likely guarantees a poor educational experience for learners and changes the
learner-instructor relationship in ways that are not always positive (Alexander, Perrault, Zhao, & Waldman, 2009; Kellogg & Smith, 2009; Liu, Magjuka, Bonk, & Lee, 2007).
THE OBJECTIVE OF THIS BOOK
One way to address such concerns is through researching the phenomenon to determine whether and under what conditions the use of the medium is most effective. However, concerns regarding the quality of research on e-learning have long persisted. From concerns such as over-reliance upon single-course studies (Phipps & Merisotis, 1999), to lack of randomized experimental designs (Bernard, Abrami, Lou, & Borokhovski, 2004), to incomplete and/or imprecise measures of student learning (Sitzmann, Ely, Brown, & Bauer, 2010), to more general concerns over methodological quality (Bernard et al., 2009), concerns regarding the rigor of research on e-learning are not new. These concerns regarding research quality are compounded by the fact that although the number of e-learning instructors continues to increase, the number of scholars with a sustained history of research contributions on the topic has remained comparatively small. For example, a recent review of the literature on e-learning in business education reported that fewer than 20 scholars were contributing intensively to this literature (three or more articles), and this number was boosted in part because these scholars were collaborating with each other (Arbaugh et al., 2009). If such an imbalance between dedicated e-learning researchers and e-learning educational practitioners exists in other disciplines, it is evident that we would greatly benefit from substantially increasing the number of researchers dedicated to examining this increasingly pervasive phenomenon.
THE AUDIENCE OF THIS BOOK
This book is for practitioners, managers, researchers, and graduate students in virtually every field of study. Application areas of e-learning are not limited to a specific academic area. E-learning is a persistent worldwide trend that is also being used to educate employees of non-academic organizations such as governments and for-profit and non-profit organizations. Needless to say, libraries in universities and in for-profit and non-profit organizations around the world are potential customers. Therefore, we have produced a book that will help introduce these instructors, researchers, practicing managers, and graduate students in the e-learning community to research on satisfaction and learning outcomes in e-learning. Besides providing new instructors (who, in turn, could become new researchers) entering the e-learning realm with insights from previous research on effective instructional practices, why not offer a book that might help them examine and conduct their work more thoroughly? It is our hope that new (and not so new) instructors, researchers, practicing managers, and graduate students will use the materials in this book to enter the increasingly fascinating field of research in online teaching and learning.
THE CONTRIBUTORS OF THIS BOOK
In compiling this book’s contents, we are particularly pleased that we have both a multi-national and a multi-disciplinary composition of contributors of the book’s chapters. We have authors from institutions
in Canada, Mexico, Romania, South Africa, Spain, Turkey, the United Arab Emirates, and the United States. These scholars represent fields such as adult education, computer science, distance education, economics, educational leadership, Information Systems, instructional technology, international management, marketing, and strategy. Considering the diverse backgrounds from which the theoretical and methodological perspectives used in e-learning research are drawn, we feel that incorporating the works of scholars from varied backgrounds not only informs the reader of the breadth of research conducted in this emerging field, but also affords the chapter authors the opportunity to bring the perspectives of this collection of works back to inform scholars in their respective disciplines.
THE STRUCTURE OF THIS BOOK
When one seeks to enter a new research field, familiarizing oneself with some of that field’s influential articles is a necessary starting point. However, being able to see those articles in the broader context of the field’s predominant theoretical and methodological influences and potential future directions can help scholars to determine where their expertise and skills can make the most appropriate contribution. Therefore, we organized the book’s chapters to familiarize prospective researchers with processes and topics for conducting research in e-learning. The book is divided into four sections: Theoretical Frameworks, Empirical Research Methods and Tutorial, Factors Influencing Student Satisfaction and Learning Outcomes, and Other Applications of Theory and Method. The first section, Theoretical Frameworks, introduces readers to emerging methodological and theoretical perspectives for effective empirical e-learning research. The two chapters in the book’s first section present a case for increased use of multi-course, multi-disciplinary studies and provide an overview and application of an increasingly influential model of e-learning effectiveness, the Community of Inquiry framework (Garrison, Anderson, & Archer, 2000). In chapter 1, Arbaugh argues that research in online teaching and learning in higher education should take a multi-disciplinary orientation, especially in settings whose curricula are drawn from several disciplinary perspectives such as business schools. The benefits of a multi-disciplinary approach include curriculum integration and enhanced communication and collective methodological advancement among online teaching and learning scholars from the disciplines that comprise the integrated curricula. After reviewing multi-disciplinary studies in business education published to date, the chapter concludes with recommendations for advancing research in this emerging stream. Some of the primary recommendations include the use of academic discipline as a moderating variable, more studies incorporating samples comprised of faculty and/or undergraduate students, and the development of more comprehensive measures of student learning. In chapter 2, Akyol and Garrison explain the capability of the Community of Inquiry (CoI) framework as a research model to study student learning and satisfaction. The framework identifies three elements (social, cognitive, and teaching presence) that contribute directly to the success of an e-learning experience through the development of an effective CoI. It is argued that a CoI leads to higher learning and increased satisfaction. The chapter presents findings from two online courses designed using the CoI approach. Overall, the students in these courses had high levels of perceived learning and satisfaction as well as actual learning outcomes. The second section of the book is titled Empirical Research Methods and Tutorial. Because empirical research in e-learning is our topic of interest, it seems particularly appropriate that research methods are the focus of the book’s second section. After a review of research methods employed to date in a
relatively active discipline, the book’s second section chronicles and provides examples of several of the structural equation modeling techniques whose increased use was called for in the review chapter. This section also includes chapters that deal with higher-order multivariate techniques, experimental designs, and data mining. The first chapter in this section, chapter 3, reviews the online teaching and learning literature in business education and finds growing sophistication in analytical approaches over the last 10 years. We believe researchers are uncovering important findings from the large number of predictors, control variables, and criterion variables examined. Scholars are employing appropriate and increasingly sophisticated techniques such as structural equation models in recent studies within a field setting. To increase methodological rigor, researchers need to consciously incorporate control variables that are known to influence criterion variables of interest so as to clearly partial out the influence of their predictor variables of interest. This will help address shortcomings arising from the inability to convince sample respondents such as instructors, institutional administrators, and graduate business students on the benefits versus the cost of a fully randomized design approach. Chapter 4 is an introduction to path analysis modeling using LISREL. Over the past decades, we have seen a wide range of empirical research in the e-learning literature. The use of multivariate statistical tools has been a staple of the research stream throughout the decade. Path analysis modeling is one of four related multivariate statistical models: regression, path analysis, confirmatory factor analysis, and structural equation models. This chapter focuses on path analysis modeling for beginners using LISREL 8.70. Several topics covered in this chapter include foundational concepts, assumptions, and steps of path analysis modeling. The major steps in path analysis modeling explained in this chapter consist of specification, identification, estimation, testing, and modification of models. Chapter 5, “Testing the DeLone-McLean Model of Information System Success in an E-Learning Context,” has two important objectives: (a) introduction of structural equation modeling for a beginner; and (b) empirical testing of the validity of the information system (IS) success model of DeLone and McLean (the DM model) in an e-learning environment using LISREL. The next section presents the research model tested and a discussion of the survey instrument. The structural equation modeling process is fully discussed including specification, identification, estimation, testing, and modification of the model. The final section summarizes the test results. To build e-learning theories, untested conceptual frameworks must be tested and refined. Nevertheless, there has been very little testing of these frameworks. This chapter is concerned with the testing of one such framework. There is abundant prior research that examines the relationships among information quality, system quality, system use, user satisfaction, and system outcomes. This is the first study that focuses on the testing of the DM model in an e-learning context. Chapter 6 is an introduction to Structural Equation Modeling (SEM) and the partial least squares (PLS) methodology.
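As a purely editorial illustration of the modeling family that Chapters 4 through 6 introduce (this sketch is not drawn from any chapter; the data, variable names, and software choice are hypothetical), a minimal recursive path model with observed variables can be estimated as a series of ordinary least squares regressions on standardized data, one regression per endogenous variable:

```python
# Illustrative sketch only (synthetic data, hypothetical variable names):
# a recursive path model  x -> m -> y  with a direct x -> y path, estimated
# as two OLS regressions, one per endogenous variable.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)                                  # exogenous predictor
m = 0.6 * x + rng.normal(scale=0.8, size=n)             # mediator
y = 0.5 * m + 0.2 * x + rng.normal(scale=0.8, size=n)   # outcome

# Standardizing makes the OLS coefficients interpretable as path coefficients.
data = pd.DataFrame({"x": x, "m": m, "y": y})
data = (data - data.mean()) / data.std()

eq_m = sm.OLS(data["m"], sm.add_constant(data[["x"]])).fit()       # m on x
eq_y = sm.OLS(data["y"], sm.add_constant(data[["m", "x"]])).fit()  # y on m, x

print("path x -> m:", round(eq_m.params["x"], 3))
print("path m -> y:", round(eq_y.params["m"], 3))
print("direct path x -> y:", round(eq_y.params["x"], 3))
print("indirect effect x -> m -> y:",
      round(eq_m.params["x"] * eq_y.params["m"], 3))
```

Dedicated SEM software such as LISREL adds what this sketch omits, formal identification checks, overall fit statistics, and modification indices, supporting the specification, identification, estimation, testing, and modification cycle described in Chapter 4.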
Over the past 15 years, the use of Partial Least Squares (PLS) in academic research has enjoyed increasing popularity in many social sciences including information systems, marketing, and organizational behavior. PLS can be considered an alternative to covariance-based SEM and has greater flexibility in handling various modeling problems in situations where it is difficult to meet the hard assumptions of more traditional multivariate statistics. This chapter focuses on PLS for beginners. Several topics are covered and include foundational concepts in SEM, the statistical assumptions of PLS, a LISREL-PLS comparison, and reflective and formative measurement. Chapter 7, “Using Experimental Research to Investigate Students’ Satisfaction with Online Learning,” discusses the most common experimental designs and threats to internal validity of experimental
procedures that must be controlled to ensure that the interventions or programs under investigation are responsible for changes in the dependent variables of interest. A study by Bangert (2008) is used to illustrate procedures for conducting experimental research, controlling potential threats to internal validity, and reporting results that communicate both practical and statistical significance. The use of experimental research for investigating the effectiveness of technology-supported instructional innovations in K-12 and higher education settings is fairly limited. The implementation of the No Child Left Behind Act (NCLB) of 2001 has renewed the emphasis on the use of experimental research for establishing evidence to support the effectiveness of instructional interventions and other school-based programs in K-12 and higher education contexts. Chapter 8 introduces data mining as an empirical analysis tool for analyzing student performance in e-learning environments. The aim of this chapter is to explore the application of data mining for analyzing performance and satisfaction of the students enrolled in an online two-year master’s degree programme in project management. This programme is delivered by the Academy of Economic Studies, the largest Romanian university in economics and business administration, in parallel as an online programme and as a traditional one. The main data sources for the mining process are the survey made for gathering students’ opinions, the operational database with the students’ records, and data regarding students’ activities recorded by the e-learning platform. More than 180 students responded, and more than 150 distinct characteristics/variables per student were identified. Due to the large number of variables, data mining is a recommended approach to analyzing these data. Clustering, classification, and association rules were employed in order to identify the factors explaining students’ performance and satisfaction, and the relationship between them. The results are very encouraging and suggest several future developments. Chapter 9 is the last chapter of the second section of the book. The purposes of this chapter are three-fold: (1) to present our findings in investigating the success factors for designing, developing and delivering e-learning initiatives, (2) to examine the applicability of Information Systems theories to study e-learning success, and (3) to demonstrate the usefulness of action research in furthering our understanding of e-learning success. Inspired by issues and challenges experienced in developing an online course, a process approach for measuring and assessing e-learning success is advanced. This approach adopts an Information Systems perspective on e-learning success to address the question of how to guide the design, development, and delivery of successful e-learning initiatives. The validity and applicability of our process approach to measuring and assessing e-learning success are demonstrated in empirical studies involving cycles of action research. Merits of our approach are discussed, and its contributions in paving the way for further research opportunities are presented. The third section of the book is titled Factors Influencing Student Satisfaction and Learning Outcomes. After presenting prevailing theoretical and methodological perspectives, the book’s third section examines particular influences on e-learning course outcomes in a variety of settings.
These chapters examine factors such as learner dispositional and behavioral characteristics, quality assurance frameworks for e-learning effectiveness, course content design and development, and their roles in shaping effective e-learning environments. To date, much of e-learning research has focused on asynchronous learning environments (exemplified by the Journal of Asynchronous Learning Networks) based in higher education settings. However, these are not the only contexts in which e-learning occurs. Therefore, we also address the alternative and potentially increasingly important settings of synchronous course delivery and corporate learning environments in this section.
Chapter 10 is concerned with quality assurance in e-learning. Quality is a subjective concept, and as such, there are many criteria for assuring quality, including assessment practices based on industry standards and accreditation requirements. Most assessments, including quality assurance in e-learning, frequently occur at three levels: individual course assessments, department or program assessments, and institutional assessments; frequently these levels cannot be distinctly delineated. While student evaluations are usually included within these frameworks, student views are but one variable in the quality assessment equation. To offer some plausible perspectives of how students view quality, this chapter will provide an overview of quality assurance for online learning from the course, program, and institutional viewpoints as well as review some of the key research related to students’ assessment of what constitutes quality in online courses. Chapter 11 presents the synchronous virtual classroom (SVC) success model. The SVC model will help instructors design online courses that incorporate the factors that students need to be successful. This model will also help virtual classroom instructors and managers develop a systematic way of identifying and addressing the external and internal factors that might impact the success of their instruction. The strategies for empirically researching the SVC, which range from qualitative inquiry to experimental design, are discussed along with practical examples. This information will benefit instructors, researchers, non-profit and profit organizations, and academia. Chapter 12, “Factors Influencing User Satisfaction with Internet-Based E-Learning in Corporate South Africa,” examines the factors that influence user satisfaction with Internet-based learning in the South African corporate environment. An electronic survey was administered, and one hundred and twenty responses from corporations across South Africa were received. Only five of the thirteen factors were found to exert a statistically significant influence on learner satisfaction: instructor response towards the learners, instructor attitude toward Internet-based learning, the flexibility of the course, perceived usefulness, perceived ease of use, and the social interaction experienced by the learner in assessments. Interestingly, four of those five were also identified as significant in a similar Taiwanese study, which provides an interesting cross-cultural validation for the findings, even though our sample was different and smaller. Perhaps surprisingly, none of the six demographic variables exerted significant influence. Hopefully organisations and educational institutions can note and make use of the important factors in conceptualizing and designing their e-learning courses. Chapter 13 examines the relationships between student personality and e-learning outcomes. Among students enrolled in Web-based courses, some students learn a lot while others do not. There are many possible reasons for the differences in learning outcomes (e.g., student’s learning style, satisfaction, motivation, etc.). In the last few decades, student personality has emerged as an important factor influencing the learning outcomes in a traditional classroom environment. Among different personality models, the Big-Five model of personality has been successfully applied to help understand the relationship between personality and learning outcomes.
Because Web-based courses are becoming popular, the Big-Five model is applied to find out if students’ personality traits play an important role in Web-based course learning outcomes. Chapter 14, “A Method for Adapting Learning Objects to Students’ Preferences,” analyzes the different learning theories and styles, as well as the main standards for creating content, with the goal of developing a proposal for structuring courses and organizing material which best fits students’ needs, in order to increase motivation and improve the learning process. The objective of this chapter is to analyze different factors that influence student learning. To identify different factors that influence student learning, it was necessary to review different learning theories and different learning styles.
After that, the authors analyzed the role of teachers and their main responsibilities, and students’ learning process in order to propose a pedagogical structure for an e-learning course. The relevant roles that both teaching contents and e-learning play were also discussed. An active teacher who participates and creates high-quality content is necessary to prevent a sense of isolation, discouragement, and lack of motivation. Considering all these factors and the special features of each student as regards the way he or she learns, this chapter proposes a new method that facilitates teaching and adapts knowledge to the preferences of each student. The fourth section of the book includes three chapters that deal with other applications of e-learning theory and method. The book’s final section extends the approach of alternative e-learning theory and environments through applying the Unified Theory of Acceptance and Use of Technology (UTAUT) of Venkatesh et al. (2003), project-based learning, and blended learning. The objective of Chapter 15, “Understanding Graduate Students’ Intended Use of Distance Education Platforms,” is to apply the Unified Theory of Acceptance and Use of Technology to better understand graduate students’ intended use of distance education platforms, using as a case a distance education platform of a Mexican university, the SERUDLAP system. Four constructs are hypothesized to play a significant role: performance expectancy, effort expectancy, social influence, and attitude toward using technology; the moderating factors were gender and voluntariness of use. Data for the study was gathered through an online survey with a response rate of about 41%. Results suggested that performance expectancy and attitude towards technology are factors that help us understand graduate students’ intended use of a distance education platform. Future research must consider the impact of some factors, such as previous experiences, age, and facilitating conditions, in order to better understand the students’ behavior. Chapter 16 is concerned with the investigation of critical issues, dynamics, and challenges related to project-based learning (PBL) from the perspectives of 49 students in an online course. The effect of PBL was examined qualitatively with an open-ended questionnaire, observations, and the submissions of students taking an online certificate course. According to the findings, students thought that an online PBL course supports their professional development with provision of practical knowledge, enhanced project development skills, self-confidence, and research capability. This support is further augmented with the facilities of the online learning environment. Students mainly preferred team-work over individual work. Although students were mostly satisfied with the course, they still had some suggestions for prospective students and instructors. The findings are particularly important for those people who are planning to organize courses or activities which involve online PBL and who are about to take an online or face-to-face PBL course. Chapter 17 is the last chapter of the book. This chapter is a case study that examines students’ perceptions, interaction, and satisfaction in interactive blended courses. Blended courses, which offer students and teachers several possibilities such as becoming more interactive and more active, have become increasingly widespread in both K-12 and higher education settings.
With the rise of cutting-edge technologies, institutions and instructors have embarked on creating new learning environments with a variety of new delivery methods. At the same time, designing visually impressive and attractive blended settings for students has become easier with extensive learning and content management systems (LMS, CMS, LCMS) such as Blackboard, WebCT, and Moodle, and virtual classroom environments (VLE) such as Adobe Connect, Dimdim, and WiZiQ. In this study, we aimed to investigate students' perspectives on, and satisfaction with, the designed interactive blended learning settings and to find out students' views on both the synchronous and the asynchronous interactive blended learning environment (IBLE).
THE FUTURE OF E-LEARNING RESEARCH

Although this book examines numerous topics for which research has been conducted, there are several areas in which e-learning research is still in its infancy. To help steer prospective scholars in directions where they might provide immediate impact, we conclude this section with a brief discussion of some of these topics:
IMPACTS OF E-LEARNING BY AND ON INSTRUCTORS

There have been numerous studies of student reactions to e-learning and of potential predictors of effective learner-related outcomes. However, studies of the other primary participants in e-learning environments, the instructors, have been slow in coming. Fortunately, we are beginning to see studies that focus on instructor motivations and reactions to e-learning (e.g., Connolly, Jones, & Jones, 2007; Coppola, Hiltz, & Rotter, 2002; Shea, 2007). As instructors continue to increase their knowledge of and experience with e-teaching, there likely will be research opportunities for comparing the attitudes, motivations, and behaviors of novice versus experienced online instructors. Other instructor-related research opportunities may include consideration of changes in workplace environments, interactions with students outside of class, working relationships with colleagues, and relationships with their host institutions.
POTENTIAL INFLUENCES OF ACADEMIC DISCIPLINES ON E-LEARNING EFFECTIVENESS

Considering that many of the theoretical foundations of e-learning research have come from the communities of instructional technology, Information Systems, and educational theory, it is not surprising that the potential role that epistemological, sociological, and behavioral characteristics of academic disciplines may play in shaping effective e-learning environments has received limited research attention to date. However, recent studies of disciplinary effects in e-learning suggest that they may have distinct influences on both course design (Smith, Heindel, & Torres-Ayala, 2008) and student retention, attitudes, and performance (Arbaugh & Rau, 2007; Hornik, Sanders, Li, Moskal, & Dziuban, 2008). Such initial findings suggest that potential disciplinary effects in e-learning should be a focus of prospective researchers.
INCREASED GLOBAL COVERAGE AND CROSS-CULTURAL STUDIES

Although our book has a multi-national pool of contributors, regional attitudes toward e-learning are, thus far, not universally positive. For example, portions of the Middle East view e-learning and distance education with disdain (Rovai, 2009), as is indicated in part by studies from the region that resort to using prison inmates as research samples (Al Saif, 2007). As universities from other parts of the world collaborate to create branch campuses or joint ventures with institutions in the Middle East, assessing present attitudes toward the conduct of e-learning, and potential changes in those attitudes, could yield a productive stream of research. Also, although our book does not examine Asian e-learning settings,
a review of contributors from prominent e-learning journals such as Computers & Education suggests that Asian scholars and institutions will become increasingly influential in shaping e-learning research agendas. We certainly would welcome collaborations between scholars from these emerging regions and those where e-learning has become comparatively well-established.
ISSUES IN E-LEARNING EMPIRICAL RESEARCH

As we are now entering what may be a golden age of e-learning, we have witnessed an increasing proportion of e-learning empirical research using highly sophisticated research tools such as structural equation modeling. A review of the major works of Kuhn (1970a), Kaplan (1964), Dubin (1969), and Cushing (1990) describes the process by which an academic discipline becomes established in terms of four steps:

1. Consensus building among a group of scientists about the existence of a body of phenomena that is worthy of scientific study (Kuhn 1970a);
2. Empirical study of the phenomena to establish a particular fact or a generalization (Kaplan 1964);
3. Articulation of theories to provide a unified explanation of established empirical facts and generalizations (Kuhn 1970a); and
4. Paradigm building to reach a consensus on the set of elements possessed in common by practitioners of a discipline, such as shared commitments, shared values, and shared examples (exemplars) (Kuhn 1970a).

More than 30 years ago, Keen identified three prerequisites that must be fulfilled for the management information systems (MIS) area to become a coherent and substantive field: defining the dependent variables, clarifying the reference disciplines, and building a cumulative research tradition. An important objective of this book is to clearly define the dependent variables in e-learning empirical research. The review of Arbaugh et al. (2009) suggests that the e-learning systems area has been building a cumulative research tradition through empirical and non-empirical research during the past decade. In our judgment, we are heading toward the stage of articulating e-learning theories to provide a unified explanation of established empirical facts and generalizations. To articulate e-learning theories, we need consensus as to which dependent and independent variables are worthy of investigation. During the past decade, a large number of e-learning empirical studies investigated the impacts of a very large number of factors. To provide a useful lesson to the e-learning community, let us use an example from decision support systems (DSS) and group support systems (GSS) empirical research. Eom summarized the state of DSS/GSS empirical research over the past decades this way (Eom, 2007, p. 436): A previous study (Farhoomand 1987) shows an increasing proportion of empirically based DSS research. Nevertheless, this accelerating rate of DSS research publication and the steady transition from non-empirical to empirical studies have not resulted in DSS theory building. Some researchers abandoned their efforts to develop context-free DSS theories and suggested that future DSS research should focus on modeling the specific "real world" target environment. This environment is characterized in terms of organizational contexts, group characteristics, tasks, and EMS environments (Dennis et al. 1990-1991). Other empirical researchers continue their efforts to integrate the seemingly conflicting results of empirical experiments (Benbasat and Lim 1993). However, the considerable amount of empirical research in GDSS,
user interface/individual differences, and implementation has produced conflicting, inconsistent results due to methodological problems, the lack of a commonly accepted causal model, different measures of dependent variables, hardware and software designed under different philosophies, etc. (Benbasat et al. 1993; Dennis et al. 1988; Jarvenpaa et al. 1985; Pinsonneault and Kraemer 1989; Zigurs 1993) This problem of empirical research in the DSS/GSS areas could recur in e-learning empirical research. The four causes that produced inconsistent and conflicting empirical results in the DSS area are also present in e-learning empirical research. Some of the evident problems in e-learning empirical research stem from the comparison of apples and oranges. Occasionally, studies compare the results of studies based on samples of different subjects in terms of age and generation, gender, discipline, course level (undergraduate vs. graduate), demographics, socio-economic status, et cetera. Perceived e-learning outcomes and levels of satisfaction are the results of the interplay of many psychological, socio-economic, cultural, and other variables. In future e-learning empirical research, it may be prudent to focus on the specific "real world" target environment, seeking to develop context-specific mid-range theory, rather than context-free e-learning theory. The models need to be parsimonious; at the same time, they must be complex enough to capture the reality of this world. Measurement issues are another area of concern. These issues include the design of questionnaires and software issues; as we see more empirical research, there is also a need to inform readers of the details of the survey instruments and the list of questions used. For example, some studies provide no information on their survey instruments (LaPointe & Gunawardena, 2004). Further, there is a need to develop standardized indicator variables. Certainly, the same group of students may respond quite differently to the following two questions, each intended to measure the learning outcomes of online education:

• I have learned a lot in this course (Peltier, Drago, & Schibrowsky, 2003).
• I feel that I learned more in online courses than in face-to-face courses (Eom, Ashill, & Wen, 2006).
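To make the measurement point concrete, the short Python sketch below computes Cronbach's alpha for a small set of hypothetical perceived-learning indicator items; the item names, responses, and seven-point scale are invented for illustration and are not taken from the studies cited above.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert-type indicator items (one column per item)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses (1-7 scale) from six students to three differently
# worded items intended to tap the same perceived-learning construct.
responses = pd.DataFrame({
    "learned_a_lot_in_course":  [6, 5, 7, 4, 6, 5],
    "learned_more_than_f2f":    [5, 4, 6, 3, 5, 4],
    "overall_learning_quality": [6, 5, 7, 4, 7, 5],
})

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```

A low alpha in such a check would signal that differently worded items are not interchangeable indicators of the same outcome, which is precisely the standardization problem raised above.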
There are also software issues. For example, the two common approaches to SEM are the covariance-based approach used in LISREL, AMOS, and EQS and the variance-based approach used in PLS-PC, PLS-Graph, Smart-PLS, and XLSTAT-PLS. The fundamental differences between LISREL and PLS are reflected in their algorithms and optimum properties. The same data set may produce different sets of results (unacceptable vs. acceptable) with the covariance-based and the variance-based approach, due to the differences between the two.

Sean B. Eom
Southeast Missouri State University, USA

J.B. Arbaugh
University of Wisconsin Oshkosh, USA
REFERENCES

Al Saif, A. A. (2007). Prisoners' attitudes toward using distance education whilst in prisons in Saudi Arabia. Issues in Informing Science and Information Technology, 4, 125–131.
Alavi, M., & Leidner, D. E. (2001). Research commentary: Technology-mediated learning – a call for greater depth and breadth of research. Information Systems Research, 12(1), 1–10. doi:10.1287/ isre.12.1.1.9720 Alexander, M. W., Perrault, H., Zhao, J. J., & Waldman, L. (2009). Comparing Aacsb faculty and student online learning experiences: Changes between 2000 and 2006. Journal of Educators Online, 6. Allen, I. E., & Seaman, J. (2010). Learning on demand: Online education in the United States 2009. Wellesley, MA: Babson Survey Research Group. Arbaugh, J. B. (2005). How much does “subject matter” matter? A study of disciplinary effects in online MBA courses. Academy of Management Learning & Education, 4(1), 57–73. Arbaugh, J. B., Godfrey, M. R., Johnson, M., Pollack, B. L., Niendorf, B., & Wresch, W. (2009). Research in online and blended learning in the business disciplines: Key findings and possible future directions. The Internet and Higher Education, 12, 71–87. doi:10.1016/j.iheduc.2009.06.006 Arbaugh, J. B., & Rau, B. L. (2007). A study of disciplinary, structural, and behavioral effects on course outcomes in online MBA courses. Decision Sciences Journal of Innovative Education, 5(1), 65–94. doi:10.1111/j.1540-4609.2007.00128.x Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, E., Tamim, R. M., & Surkes, M. A. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 79, 1243–1289. doi:10.3102/0034654309333844 Bernard, R. M., Abrami, P. C., Lou, Y., & Borokhovski, E. (2004). A methodological morass? How can we improve quantitative research in distance education. Distance Education, 25(2), 175–198. doi:10.1080/0158791042000262094 Bonk, C. J. (2009). The world is open: How Web technology is revolutionizing education. San Francisco, CA: Jossey-Bass. Connolly, M., Jones, C., & Jones, N. (2007). New approaches, new vision: Capturing teacher experiences in a brave new online world. Open Learning, 22(1), 43–56. doi:10.1080/02680510601100150 Coppola, N. W., Hiltz, S. R., & Rotter, N. G. (2002). Becoming a virtual professor: Pedagogical roles and asynchronous learning networks. Journal of Management Information Systems, 18, 169–189. Eom, S. B. (2007). The development of decision support systems research: A bibliometrical approach. Lewiston, NY: The Edwin Mellen Press. Eom, S. B., Ashill, N., & Wen, H. J. (2006). The determinants of students’ perceived learning outcome and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4(2), 215–236. doi:10.1111/j.1540-4609.2006.00114.x European Commission. (2010). ICT results, educating Europe: Exploiting the benefits of ICT. Retrieved August 31, 2010, from http://cordis.europa.eu/ictresults/pdf/policyreport/INF%207%200100%20 IST-R%20policy%20report-education_final.pdf
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2, 87–105. doi:10.1016/ S1096-7516(00)00016-6 Hayes, S. K. (2007). Principles of finance: Design and implementation of an online course. MERLOT Journal of Online Teaching and Learning, 3, 460–465. Haytko, D. L. (2001). Traditional versus hybrid course delivery systems: A case study of undergraduate marketing planning courses. Marketing Education Review, 11(3), 27–39. Hiltz, S. R., & Turoff, M. (1978). The network nation: Human communication via computer. Reading, MA: Addison-Wesley. Hornik, S., Sanders, C. S., Li, Y., Moskal, P. D., & Dziuban, C. D. (2008). The impact of paradigm development and course level on performance in technology-mediated learning environments. Informing Science, 11, 35–57. Kellogg, D. L., & Smith, M. A. (2009). Student-to-student interaction revisited: A case study of working adult business students in online course. Decision Sciences Journal of Innovative Education, 7(2), 433–454. doi:10.1111/j.1540-4609.2009.00224.x LaPointe, D. K., & Gunawardena, C. N. (2004). Developing, testing and refining of a model to understand the relationship between peer interaction and learning outcomes in computer-mediated conferencing. Distance Education, 25(1), 83–106. doi:10.1080/0158791042000212477 Liu, X., Magjuka, R. J., Bonk, C. J., & Lee, S. (2007). Does sense of community matter? An examination of participants’ perceptions of building learning communities in online course. Quarterly Review of Distance Education, 8(1), 9–24. Marriott, N., Marriott, P., & Selwyn, N. (2004). Accounting undergraduates’ changing use of ICT and their views on using the internet in higher education – a research note. Accounting Education, 13(S1), 117–130. doi:10.1080/0963928042000310823 Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, DC: U.S. Department of Education. Morgan, G., & Adams, J. (2009). Pedagogy first: Making Web technologies work for soft skills development in leadership and management education. Journal of Interactive Learning Research, 20, 129–155. Peltier, J. W., Drago, W., & Schibrowsky, J. A. (2003). Virtual communities and the assessment of online marketing education. Journal of Marketing Education, 25(3), 260–276. doi:10.1177/0273475303257762 Phipps, R., & Merisotis, J. (1999). What’s the difference: A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC. Retrieved from www.ihep.com/ PUB.htm Piccoli, G., Ahmad, R., & Ives, B. (2001). Web-based virtual learning environments: A research framework and a preliminary assessment of effectiveness in basic IT skills training. Management Information Systems Quarterly, 25(4), 401–426. doi:10.2307/3250989
Rovai, A. P. (2009). The Internet and higher education: Achieving global reach. Cambridge, UK: Chandos Publishing. Schniederjans, M. J., & Kim, E. B. (2005). Relationship of student undergraduate achievement and personality characteristics in a total Web-based environment: An empirical study. Decision Sciences Journal of Innovative Education, 3(2), 205–221. doi:10.1111/j.1540-4609.2005.00067.x Shea, P. (2007). Bridges and barriers to teaching online college courses: A study of experienced online faculty in thirty-six colleges. Journal of Asynchronous Learning Networks, 11(2), 73–128. Simmering, M. J., Posey, C., & Piccoli, G. (2009). Computer self-efficacy and motivation to learn in a self-directed online course. Decision Sciences Journal of Innovative Education, 7(1), 99–121. doi:10.1111/j.1540-4609.2008.00207.x Sitzmann, T., Ely, K., Brown, K. G., & Bauer, K. N. (2010). Self-assessment of knowledge: A cognitive learning or affective measure? Academy of Management Learning & Education, 9, 169–191. Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative effectiveness of Web-based and classroom instruction: A meta-analysis. Personnel Psychology, 59, 623–664. doi:10.1111/j.1744-6570.2006.00049.x Smith, G. G., Heindel, A. J., & Torres-Ayala, A. T. (2008). E-learning commodity or community: Disciplinary differences between online courses. The Internet and Higher Education, 11, 52–159. doi:10.1016/j.iheduc.2008.06.008 Stahl, B. C. (2004). E-teaching – the economic threat to the ethical legitimacy of education? Journal of Information Systems Education, 15, 155–162. Tanner, J. R., Noser, T. C., & Totaro, M. W. (2009). Business faculty and undergraduate students' perceptions of online learning: A comparative study. Journal of Information Systems Education, 20(1), 29–40. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of Information Technology: Toward a unified view. Management Information Systems Quarterly, 27(3), 425–478. Wan, Z., Wang, Y., & Haggerty, N. (2008). Why people benefit from e-learning differently: The effects of psychological processes on e-learning outcomes. Information & Management, 45, 513–521. doi:10.1016/j.im.2008.08.003 Wilkes, R. B., Simon, J. C., & Brooks, L. D. (2006). A comparison of faculty and undergraduate students' perceptions of online courses and degree programs. Journal of Information Systems Education, 17, 131–140.
Section 1
Theoretical Frameworks
Chapter 1
Multi-Disciplinary Studies in Online Business Education: Observations, Future Directions, and Extensions

J. B. Arbaugh
University of Wisconsin Oshkosh, USA
ABSTRACT

This chapter argues that research in online teaching and learning in higher education should take a multi-disciplinary orientation, especially in settings whose curricula are drawn from several disciplinary perspectives, such as business schools. The benefits of a multi-disciplinary approach include curriculum integration and enhanced communication and collective methodological advancement among online teaching and learning scholars from the disciplines that comprise the integrated curricula. After reviewing multi-disciplinary studies in business education published to date, the chapter concludes with recommendations for advancing research in this emerging stream. Some of the primary recommendations include the use of academic discipline as a moderating variable, more studies that incorporate samples comprised of faculty and/or undergraduate students, and the development of more comprehensive measures of student learning.
INTRODUCTION

Over the past decade, the delivery of management education via the internet has become increasingly common, even among institutions accredited by
AACSB International (Alexander, Perrault, Zhao, & Waldman, 2009; Popovich & Neel, 2005). With increasing acceptance of this educational medium has come increased research attention, approaching 200 peer-reviewed articles on business disciplines during the last decade (Arbaugh, Godfrey, Johnson, Leisen Pollack, Niendorf, &
Wresch, 2009). However, because many of these articles employed research samples that examined fewer than five class sections within a single business discipline, their ability to inform business school educators and administrators regarding the design, development, and integration of a business curriculum is somewhat limited. When a business school considers the development, design, and delivery of an online degree program, one might expect that an integrated curriculum of well-designed courses that capture the intricacies of the differences and the interdependencies between business disciplines in a technologically sound manner would be an excellent starting point. However, because the business school is multi-disciplinary in orientation, there tends to be substantial variety in the development and delivery of business school curricula, particularly at the MBA level (Navarro, 2008; Rubin & Dierdorff, 2009). Considering that recent exploratory research suggests that there may be fundamental disciplinary-related differences in the design and conduct of online courses in business schools (Arbaugh, Bangert, & Cleveland-Innes, 2010), the need to examine online teaching and learning in business schools comprehensively rather than by individual silos becomes increasingly apparent if these schools are to provide a quality educational experience for increasingly demanding stakeholders (Julian & Ofori-Dankwa, 2006; O'Toole, 2009).
MAIN FOCUS OF THE CHAPTER

In this chapter, we discuss why the relative lack of work that comprehensively examines the business school curriculum in online teaching and learning is cause for concern, and articulate the potential problems that this lack of attention may create for business schools going forward. We also examine both epistemological and practical reasons for which disciplinary differences between components of the business school curriculum
matter in online and blended delivery, and why and how studies of online business education should reflect and better capture these differences. That discussion is followed by a report of the primary findings from multi-disciplinary studies in business education published to date. The chapter concludes with a discussion of potential implications for research specific to business education that could be extended to studies of online teaching and learning in other disciplines. Although this chapter explicitly examines the state of research on online teaching and learning within business schools, we hope that it also may stimulate scholars in other disciplines to consider their fields more comprehensively when designing and conducting research.
WHY SHOULD STUDIES OF ONLINE BUSINESS EDUCATION BE MULTI-DISCIPLINARY?

As the volume of research on online teaching and learning in business education has increased dramatically during the past ten years, scholars have begun to more actively disseminate these findings. Although the volume and quality of research in online and blended business education has increased dramatically, the rate of progress across the disciplines is rather uneven (Arbaugh et al., 2009). Disciplines such as Information Systems (IS) and Management have relatively active research communities, but disciplines such as Finance and Economics have little published research (Arbaugh, 2010a). Worse yet, these scholars tend to communicate only within their particular discipline rather than engaging in cross-disciplinary dialogue with scholars from other business disciplines, let alone scholars engaged in the broader online teaching and learning literature. Although such an approach may ground a study within its respective field, this approach becomes particularly problematic for teaching because students typically receive at least some
exposure to a variety of disciplines within the business school curriculum and a particular discipline’s research progress lags the others. Therefore, widely varying research depth across disciplines also likely will result in online and/ or blended courses whose instructional quality across disciplines varies widely (Arbaugh, 2005a; Bryant, Kahle, & Schafer, 2005). This approach to researching online and blended business education employed to date has produced other negative consequences. In addition to the inconsistent research quality that results from lack of cross-disciplinary dialogue, researchers in one discipline are left unaware of theoretical perspectives and conceptual frameworks from related disciplines that could help explain phenomena in their own discipline (Wan, Fang, & Neufeld, 2007). Because the portability of theories and methods of business disciplines to research in learning and education varies widely, research in those less portable disciplines is not likely to advance substantially absent such collaborative endeavors (Arbaugh, 2010a). Finally, those with responsibilities for coordinating and directing online degree programs in business schools have little evidence to guide them when making decisions regarding the comprehensive design, emphasis, and conduct of the subjects, or for assessing the effectiveness of their current offerings. Having addressed some of the negative consequences of the current general state of affairs, let us now present some positive reasons for developing cross-disciplinary research in online business education.
Business Disciplines Differ from Each Other Epistemologically and Behaviorally

With scholarly roots in sociology and history, researchers have been studying disciplinary differences in higher education for about forty years (Kuhn, 1970; Lohdahl & Gordon, 1972; Thompson, Hawkes, & Avery, 1969). Although much of
the disciplinary differences in higher education literature is devoted to identifying and examining sociological and behavioral differences across disciplines (Becher, 1994; Hativa & Marincovich, 1995; Lattuca & Stark, 1994; Shulman, 1987, 2005; Smeby, 1996), to date epistemological differences have been the characteristic most commonly adopted from this literature for use in other educational research. One of the more popular epistemological frameworks for distinguishing differences between academic disciplines was developed by Anthony Biglan (1973). Biglan's framework categorizes academic disciplines based on their positioning along three dimensions: 1) the existence of a dominant paradigm, 2) a concern with application, and 3) a concern with life systems. These dimensions have come to be operationalized as hard/soft, pure/applied, and life/non-life respectively. Most of the subsequent research on Biglan's framework has focused on the paradigm dominance and emphasis on application dimensions. Hard disciplines are characterized by coalescence around dominant paradigms. In other words, scholars in these fields have general agreement regarding "how the world works." Conversely, soft disciplines are characterized by having multiple competing paradigms available as possible explanations of their phenomena of interest. Pure fields place more emphasis on knowledge acquisition, whereas application and integration receive stronger emphasis in applied fields. Although much of the research attention in the disciplinary differences literature has focused on these first two dimensions (Becher, 1994; Becher & Trowler, 2001; Neumann, 2001; Neumann, Parry, & Becher, 2002), the life/non-life dimension should not be ignored. This dimension may have particularly important implications for distinguishing disciplines in schools and colleges of business. Because the business school has been established as a professional school and therefore has focused on producing learners that have applied skills, for the most part its disciplines
have been considered to reside on the “applied” side of the Pure/Applied dimension (Biglan, 1973; Khurana, 2007; Trank & Rynes, 2003). As a relatively young area of study, the paradigm development of the disciplines of the business school have been considered to be behind that of those considered to be “hard” disciplines, but there is increasing consensus that business disciplines vary in degrees of “hardness.” “Hard, applied, non-life” disciplines, such as accounting and finance, call for progressive mastery of techniques in linear sequences based upon factual knowledge. Students in “hard, applied” disciplines are expected to be linear thinkers. Teaching activities generally are focused and instructive, with the primary emphasis being on the teacher informing the student (Neumann, 2001; Neumann et al., 2002). Emphasis on factual knowledge, and by extension examinations, extends from “hard, pure” to “hard, applied” disciplines, although problem solving will be emphasized more in the “hard, applied” disciplines. Conversely, teaching in “soft, applied, life” disciplines, such as management and marketing, tends to be more free-ranging, with knowledge-building being a formative process where teaching and learning activities tend to be constructive and reiterative, shaped by both practically honed knowledge and espoused theory. Students are expected to be more lateral thinkers than those in “hard” disciplines. As is the case in the field of education, scholars of educational practice in these disciplines often are encouraged to refer to class participants as learners rather than students (Dehler, Beatty, & Leigh, 2010; Whetten, Johnson, & Sorenson, 2009). In the softer disciplines, essays and group projects tend to predominate, and self-assessments are common. Because of the emphasis on developing practical skills, there is a greater need for constructive, informative feedback on assessments. Emphasis on widely transferrable skills generally will be greater in “soft, applied” disciplines than “hard, applied” ones, as will reflective practice and lifelong learning.
Possessing characteristics of both of these disciplinary extremes, the information systems discipline, with its "soft, applied, non-life" orientation, provides elements of each. Like the harder, non-life disciplines, it places a strong emphasis on inanimate objects, such as software code and applications of technology. However, like the softer disciplines, it also places a strong emphasis on group projects and discussion-based learning (Alavi, 1994; Benbunan-Fich & Hiltz, 2003). Such characteristics suggest that a particular challenge for IS instructors is balancing the roles of content expert and learning facilitator. Such epistemological and pedagogical variance across disciplines is sure to create challenges for organizing them into a unified curriculum, which is addressed in this chapter's next section.
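As an illustrative summary of the disciplinary placements discussed in this section, the sketch below encodes a few business disciplines along Biglan's three dimensions; the mapping simply restates the positions described above and is a convenience for exposition, not an authoritative coding.

```python
from typing import NamedTuple

class BiglanPosition(NamedTuple):
    paradigm: str     # "hard" (dominant paradigm) or "soft"
    application: str  # "pure" or "applied"
    life_system: str  # "life" or "non-life"

# Placements as characterized in the discussion above (illustrative only).
BUSINESS_DISCIPLINES = {
    "accounting":          BiglanPosition("hard", "applied", "non-life"),
    "finance":             BiglanPosition("hard", "applied", "non-life"),
    "management":          BiglanPosition("soft", "applied", "life"),
    "marketing":           BiglanPosition("soft", "applied", "life"),
    "information systems": BiglanPosition("soft", "applied", "non-life"),
}

# Example: group disciplines by paradigm position.
hard = [d for d, p in BUSINESS_DISCIPLINES.items() if p.paradigm == "hard"]
soft = [d for d, p in BUSINESS_DISCIPLINES.items() if p.paradigm == "soft"]
print("hard:", hard)
print("soft:", soft)
```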
Challenges of the Integrated Business Curriculum

For online business education to be effective, it seems appropriate first to consider effectiveness from a programmatic or curricular perspective rather than from the perspective of individual disciplines. Business programs seek to deliver an integrated curriculum, albeit with slight variations across institutions, usually depending upon areas of faculty or institutional competency or specialized regional or student needs, particularly at the graduate level (Engwall, 2007; Julian & Ofori-Dankwa, 2006; Rubin & Dierdorff, 2009). Therefore, it behooves business schools to ensure to the greatest extent possible that the delivery of their online programs is of consistent quality across the curriculum. It does not benefit a business school if courses in only one or two disciplines of its online offerings are well-designed and delivered. Therefore, research that conceptualizes and examines online business education in ways that consider multiple disciplines in the same study is particularly welcome. The previous discussion on epistemological and pedagogical differences between disciplines
suggests that courses in each should be designed differently for delivery in online and blended environments. Given the variety of disciplinary perspectives housed within a single field of study, the potential effects of disciplinary differences in online teaching and learning and potential conflicts between preferred instructional approaches are most likely to manifest themselves in a multidisciplinary area of study such as business. However, such differences often are not factored into discussions of the design of online and blended courses in business education. Subject matter and disciplinary effects were considered in a somewhat general manner as part of early instructional design models (i.e. Jonassen, Tessmer, & Hannum, 1999; Reigeluth, 1983; Reiser & Gagne, 1983; Van Patten, Chao, & Riegeluth, 1986). However, such effects usually are either absent in contemporary theories of online learning, or at best mentioned as a sort of “black box” category. In fact, the contemporary instructional design literature often notes how discipline-related issues are to be left to “subject matter experts” (Dempsey & Van Eck, 2002; Merrill, 2001). Although some scholars are beginning to attempt to bridge the gaps between curriculum theorists and instructional designers to encourage the reintroduction of dialogue between these communities that was lost sometime during the 1960s (Petrina, 2004), progress toward such dialogue has been rather slow. Unfortunately, such deferment of consideration of the integration of discipline and design makes it convenient for designers and administrators to advocate similar designs for all online courses. This state of affairs is particularly troublesome for the development and delivery of online degree programs in business. What guidance that does exist from the online business education literature typically calls for standardization of structure and organization of course content, requirements, and basic pedagogical operations as much as possible (Dykman & Davis, 2008a; 2008b), and any design modifications that are made be done on the basis of characteristics such as learner maturity,
technology, pedagogy, or content usage rather than disciplinary differences (Bocchi, Eastman, & Swift, 2004; Millson & Wilemon, 2008; Rungtusanatham, Ellram, Siferd, & Salik, 2004), which given this chapter’s prior discussion should now be seen as somewhat problematic. One of the implications of such an orientation is a lack of fit between the design of online courses, course management systems, and disciplines’ signature pedagogies (Smith, Heindel, & Torres-Ayala, 2008). For example, collaborative constructivism often is seen as an organizing framework for online course design in higher education, with emphasis on instructor-facilitated group activities (Garrison, Anderson, & Archer, 2000; Jonassen, Davidson, Collins, Campbell, & Haag, 1995). However, the applicability of such approaches appears to vary widely across disciplines in higher education (Murray & Renaud, 1995; Smart & Ethington, 1995). In a study of forty online MBA courses, Arbaugh and Rau (2007) found that disciplinary differences accounted for up to sixteen percent of the variance in student perceived learning and sixty-seven percent of the variance in satisfaction with the internet as an education delivery medium, and that interaction between participants, the instructor, and fellow participants had differing effect sizes in predicting perceived learning. Although preliminary, these findings suggest the importance of accounting for disciplinary differences when developing online degree programs in business. It is understandable that program directors and course designers might desire an instructional design pattern that targets the center of a “hard-soft” continuum for the sake of maintaining program consistency. Although such an approach could work well for disciplines around the center of such a continuum, this can be particularly problematic for those at the extremes. Such an approach may result in prescriptions of course structure and activities that are not soft enough for the “soft” disciplines and not hard enough for the “hard” ones (Arbaugh, 2010a). However, in
light of the observations of epistemological and behavioral differences between "soft, applied" and "hard, applied" disciplines gleaned from the disciplinary differences literature, not to mention the issues created by the "life/non-life" dimension, it is questionable whether such standardization of course designs across disciplines should take place in an online business education curriculum, or whether such standardization is even desirable.
Methodological and Statistical Analysis Benefits

Finally, research on online teaching and learning in business education should be increasingly multi-disciplinary in nature for the practical methodological and statistical benefits such designs will provide. Multi-disciplinary research samples by their nature tend to have comparatively larger sample sizes, which afford researchers opportunities to employ multivariate statistical techniques such as factor analysis (exploratory and/or confirmatory), PLS, SEM, and HLM that have been discussed in some of this book's other chapters. Also, larger samples facilitate the introduction of additional control variables due to increased statistical power. Considering that randomized experimental designs typically are not feasible for studies of business education, appropriate controls are vital for establishing the rigor necessary for the studies to produce valid and reliable evidence that can be used for designing and developing courses and programs. Besides the noted role of academic discipline, other potential control variables that warrant increased attention in research designs include participant prior experience with online teaching and/or learning (Anstine & Skidmore, 2005; Arbaugh, 2005a; 2005b; Jones, Moeeni, & Ruby, 2005; Liu, Magjuka, & Lee, 2006), student academic major (Daymont & Blau, 2008; Simmering, Posey, & Piccoli, 2009), course and/or instructor (Arbaugh, 2010b; Hwang & Arbaugh, 2006; Klein, Noe, & Wang, 2006; Williams, Duray, & Reddy, 2006) and participant demographics
(Alavi, Marakas, & Yoo, 2002, Benbunan-Fich & Hiltz, 2003). Finally, comparative studies of differing platforms and approaches to online delivery will benefit from the increased generalizability associated with relatively large, multi-disciplinary samples (Bernard, Abrami, Borokhovski, Wade, Tamim, Surkes, & Bethel, 2009). To illustrate at least some of these benefits, let us now examine the research published to date.
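Before turning to those findings, a back-of-the-envelope illustration of the statistical-power point above may be useful. The sketch below uses statsmodels to estimate how many participants per group are needed to detect effects of different sizes in a simple two-group comparison; the effect sizes and thresholds are conventional illustrative values, not figures drawn from the studies reviewed in this chapter.

```python
from statsmodels.stats.power import TTestIndPower

# Approximate per-group sample sizes needed to detect small, medium, and large
# standardized effects (Cohen's d) at alpha = .05 with 80% power.
power_analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):
    n_per_group = power_analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"d = {d}: about {n_per_group:.0f} participants per group")
```

Because the incremental effects of added control variables are often small, single-section samples rarely provide this kind of power, which is one practical argument for the larger, multi-disciplinary samples advocated here.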
FINDINGS AND CONCLUSIONS FROM PREVIOUS MULTI-DISCIPLINARY STUDIES OF ONLINE BUSINESS EDUCATION

Multi-disciplinary research in online business education uses a variety of designs, such as single-institution studies that examine courses from several disciplines, program-level studies that survey students about their collective experiences with online learning within a degree program, and broad-based institutional surveys. To date, these studies have focused largely on students and their perceptions of factors influencing online and/or blended course effectiveness.
Multi-Course and Cross-Disciplinary Studies

Considering that studies of instructors' initial single-course experiences with online business education only began to be published in the mid-1990s, it is surprising that multi-course, multi-discipline empirical studies started appearing at the turn of the century. Among these was a series of studies by Arbaugh (2000a; 2000b; 2001) that sought to identify general predictors of students' perceived learning and their satisfaction with the internet as an educational delivery medium. Using samples that included courses in disciplines such as Accounting, Finance, Management, Information Systems, and Operations Management, these early studies suggested that these course outcomes
were highly associated with the extent to which learners perceived it to be easy to interact with others in the course, the extent to which the instructor encouraged interaction, the perceived flexibility of the delivery medium, and the extent to which the instructor engaged in behaviors that reduced social distance between him/herself and the learners, thereby noting characteristics of interaction between participants in both “soft” and “hard” disciplines. Neither student age nor gender predicted course outcomes. Perhaps not surprisingly, as participants gained online course experience, their satisfaction with taking courses online also increased. Although these studies support the idea purported in the conceptual models that instructors move from being information disseminators to discussion facilitators in the online environment, they also suggested that instructors were the most influential participants in the online classroom. Subsequent multi-disciplinary studies have branched into several research streams, including additional examination of topics such as participant interaction, the role of technology, and introducing topics such as disciplinary effects and student and instructor characteristics and behaviors, but the finding of the instructor as focal point in online business education remained a consistent theme in the research as the decade progressed (Arbaugh, 2010a; Kellogg & Smith, 2009). Although initial studies tended to be grounded largely in discipline-based theoretical perspectives, we have begun to see research actively building upon prior online business education research. For example, studies of epistemological teaching (objectivist vs. constructivist) and social learning (individual vs. group) dimensions by Arbaugh & Benbunan-Fich (2006; Benbunan-Fich & Arbaugh, 2006) were grounded directly upon Leidner and Jarvenpaa’s (1995) seminal conceptual framework. Reflective of the emerging theme of the importance of the instructor, their empirical tests of this model, which used a sample of forty MBA courses from multiple disciplines, found
that courses designed in group-based objectivism, where group-oriented learning activities were incorporated with instructor-centered content delivery, had the highest levels of student perceived learning. Support for the principles of the instructor's facilitative course role and well-organized content recently has been provided in multi-disciplinary studies of Garrison, Anderson, and Archer's (2000) Community of Inquiry framework by Arbaugh and Hwang (2006) and Arbaugh (2008). Using structural equation modeling, Arbaugh and Hwang (2006) found empirical support that the framework's three presences were distinct components in a study that included fourteen courses and at least four distinct disciplines. Arbaugh (2008) found that teaching presence significantly predicted both perceived learning and delivery medium satisfaction. In this study of fifty-five courses from multiple business disciplines, teaching presence and cognitive presence were equally strong predictors of student learning, but social presence was three times as strong a predictor of delivery medium satisfaction as was teaching presence. Other multi-disciplinary work supporting the instructor's importance in online teaching and learning was provided by Peltier, Schibrowsky, and Drago (2007). This update of a previously developed framework (Peltier, Drago, & Schibrowsky, 2003) argued that learning quality in business education was a function of the pacing of course content, participant interaction, course structure, instructor mentoring, and content presentation quality. Although they found several significant relationships between the predictors in a sample consisting of students from eighteen courses in multiple disciplines, only the instructor-controlled activities of mentoring and the pacing of course content were strongly associated with learning quality. In spite of the previously discussed findings noting the importance of instructors, to date student characteristics have received much more research attention than have instructor
characteristics. The student characteristics most commonly examined have been demographically-oriented variables such as age, gender, and prior experience with technology and online learning. Recent multi-disciplinary studies generally have found little relationship between student age or gender and online course outcomes in business education (Arbaugh, 2002, 2005b; Arbaugh & Rau, 2007; Klein et al., 2006; Williams et al., 2006). As multi-disciplinary studies have been able to draw from samples of students with more varied ranges of prior experiences with online learning, there is increasing evidence of a relationship between prior experience and course outcomes (Arbaugh, 2005a; Arbaugh & Duray, 2002; Drago, Peltier, Hay, & Hodgkinson, 2005). However, the amount of prior online learning experience needed to produce that relationship may not be extensive. Analyzing data from students who had participated in up to seven online MBA courses, Arbaugh (2004) found that the most significant changes in student perceptions of the flexibility, interaction, course software, and general satisfaction with online courses occurred between the students' first and second online course. He also found that there were no significant differences in students' perceived learning with subsequent course experiences. One of the most examined learner behavioral characteristics in multi-disciplinary studies is participants' interaction with other course participants. Consistent with the theme of the importance of the instructor, the findings of this research emphatically suggest that learner-instructor interaction is a strong predictor of student learning (Arbaugh, 2000b, 2005b; Arbaugh & Benbunan-Fich, 2007; Arbaugh & Hornik, 2006; Drago et al., 2005; Drago, Peltier, & Sorensen, 2002; Eom, Wen, & Ashill, 2006; Peltier et al., 2007) and delivery medium satisfaction (Arbaugh, 2000a, 2002, 2005b; Eom et al., 2006). In fact, results from multi-disciplinary studies suggest that learner-instructor interaction may be the primary variable for predicting online course learning outcomes in online graduate business education (Arbaugh &
Rau, 2007; Drago et al., 2002; Kellogg & Smith, 2009; Marks, Sibley, & Arbaugh, 2005). Although learner-learner interaction is deemed, or at least implied, to be a necessary element of online business courses, there is increasing evidence that the primacy of learner-learner interaction as a universal course design tactic may not hold for online business education. Some early studies found that learner-learner interaction was a stronger predictor of course outcomes than learner-instructor interaction (Arbaugh, 2002; Peltier et al., 2003), but recent studies have found that learner-instructor interaction is the stronger predictor (Arbaugh & Benbunan-Fich, 2007; Arbaugh & Rau, 2007; Marks et al., 2005). The progression of this research stream motivated Kellogg and Smith's (2009) study of the influences of learner-content, learner-instructor, and learner-learner interaction in their program's required MBA course in Data Analysis. They found that students reported learning most from independent study assignments and least from learner-learner interaction. Although it is possible that the relatively "hard" disciplinary nature of this course may have lent itself less readily toward collaborative learning approaches, this study certainly raises questions regarding whether the use of collaborative approaches is universally applicable across the online business curriculum. Interest in the nature and types of interaction in which students partake also motivated Hwang and Francesco's (2010) recent study of feedback-seeking channels in blended learning environments. These authors found that although these students actively sought feedback from fellow learners and their instructors, such behavior did not predict their learning performance. The primary positive predictor of learning performance in that study was the number of discussion forums in which a learner participated. However, intensely participating in such forums actually was negatively associated with learning performance. Considering that MBA students likely are much more appropriate audiences for collaborative approaches than are
undergraduate business students (Arbaugh, 2010c), such findings should give instructors reason to pause when contemplating the development of course assignments and activities.
Influences of Technology

Although there are emerging frameworks of effective online business education, multi-discipline studies also have drawn from established frameworks from business research. One such commonly used framework is the Technology Acceptance Model (TAM). Several multi-disciplinary studies have used the TAM as a grounding framework, either in its original form (Davis, 1989) or in the extended model (Venkatesh & Davis, 2000). Collectively, this research suggests that although the model had limited predictive power for novice online learners or early course management systems (Arbaugh, 2000b; Arbaugh & Duray, 2002), the TAM has emerged as a useful framework for explaining course management system usage and satisfaction with the Internet as an educational delivery medium (Arbaugh, 2005b; Landry, Griffeth, & Hartman, 2006; Saade, Tan, & Nebebe, 2008; Stoel & Lee, 2003). Davis and Wong (2007) found that perceived usefulness and ease of use had moderate effects on student intentions to use the CECIL system at the University of Auckland, but that student perceptions of flow and playfulness of the system (which, in turn, was highly influenced by the speed of the software) were stronger predictors of intentions to use. Arbaugh (2004) found that perceived usefulness and ease of use of Blackboard increased dramatically between the first and subsequent online course experiences. A recent comparative study of national differences by Saade and colleagues (2008) found that perceived usefulness explained behavioral intentions to use a web-based learning environment. However, although the full TAM model explained over 70 percent of the variance in behavioral intention among 120 Chinese undergraduate students, it
only explained 25 percent of the variance for 362 Canadian students.
Disciplinary Effects and Online Learning Outcomes

Reflecting the concerns expressed earlier in this chapter, studies that investigate potential disciplinary effects have been slow in coming (Arbaugh, 2005a; Grandzol & Grandzol, 2006). However, we are beginning to see some initial efforts to examine disciplinary effects in online business education. Reflecting the idea of the importance of instructors in online business education, early studies of disciplinary effects specific to business courses suggested that their effects on learning outcomes may not be as large as those of instructor experience and behaviors. Arbaugh (2005a) hypothesized that disciplines for which instructors could commonly obtain doctoral degrees would be more significantly associated with course outcomes. Surprisingly, he found no such "doctoral" effect, perhaps because the relatively early development of the MBA program's online offerings favored relatively experienced online instructors. Methodological issues also may have influenced Drago and colleagues' (2002) study of course effectiveness in eighteen MBA courses. They operationalized course content on the basis of its presentation and organization rather than by discipline. Although content was the primary predictor of learning (a possible precursor of Kellogg and Smith's (2009) findings), they also found that instructor effects were more likely to predict perceptions of overall course effectiveness. A subsequent study of a more mature online MBA program by Arbaugh and Rau (2007) found more pronounced disciplinary effects. Their study, which used dummy coding of disciplines with Finance as the referent variable, found that disciplinary effects explained 67 percent of the variance in student satisfaction with the educational delivery medium in a sample of forty online MBA courses. In a recent study
with a much larger sample, Hornik, Sanders, Li, Moskal, and Dziuban (2008) examined data from 13,000 students in 167 courses during 1997-2003. Included in this sample were undergraduate-level courses in information systems, along with courses in disciplines outside the business school such as the hard sciences, nursing, social sciences, and the humanities. Hornik and colleagues found that student grades were higher and withdrawals were lower for subjects with high paradigm development (“hard” disciplines), than for those with low paradigm development (“soft” disciplines, including information systems), and that these differences were most pronounced in advanced level courses. Conversely, in a recent study that examined disciplinary relationships to the three presences in the Community of Inquiry framework, Arbaugh and colleagues (2010) found that MBA students in courses from “hard” business disciplines reported significantly lower scores on cognitive presence than students in “softer” courses. This progression of studies suggests that disciplinary effects may become more pronounced with more mature learners as both online learners and instructors gain experience with the delivery medium.
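As a purely illustrative sketch of the referent-category dummy coding described above, the following Python snippet treatment-codes discipline with Finance as the omitted reference in a toy regression; the data, variable names, and values are invented for illustration and do not reproduce Arbaugh and Rau's analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical course-level data: the discipline of each online MBA course and
# mean student satisfaction with the delivery medium (values are made up).
courses = pd.DataFrame({
    "discipline": ["Finance", "Finance", "Management", "Management",
                   "Marketing", "Marketing", "Accounting", "Information Systems"],
    "satisfaction": [3.1, 3.0, 4.2, 4.3, 4.0, 3.9, 3.4, 3.8],
})

# Treatment (dummy) coding with Finance as the omitted referent category, so each
# coefficient estimates a discipline's difference from Finance.
model = smf.ols(
    "satisfaction ~ C(discipline, Treatment(reference='Finance'))",
    data=courses,
).fit()
print(model.params)
```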
Classroom Comparison Studies

Comparison studies of online and classroom-based courses are quite common in studies of business education (Arbaugh et al., 2009). Therefore, it is not surprising that there are some multi-disciplinary comparison studies. In one of the first of these studies, Dacko (2001) compared online and full-time MBA student emphases on skill development. In a prescient study of "hard" and "soft" disciplines, he found that full-time students were more likely to perceive a greater emphasis being placed on oral communication and interpersonal skills, and that online students were more likely to perceive a greater emphasis being placed upon analytical and decision-making skills. Both groups believed that their respective perceived emphases were
what the program should be emphasizing. In their comparison of 31 online and 27 classroom-based MBA courses, Drago and Peltier (2004) found that in spite of the online courses having on average more than twice the enrollment of the classroom-based courses, class size positively predicted course structure and instructor support for online courses, but that it negatively predicted them in the classroom courses. It was unclear whether these findings could be attributed to differences in instructor practice, differing student populations in the two mediums, or other factors. More recent comparative multi-discipline studies have employed methodologies of increasing rigor. Sitzmann, Kraiger, Stewart, and Wisher's (2006) meta-analysis of 96 studies found that web-based instruction was 6 percent more effective than classroom instruction for teaching declarative knowledge. They found that the two methods essentially were equal for teaching procedural knowledge, and learners generally were satisfied equally with both methods as education delivery mediums. However, because only eight of these studies directly addressed business education, generalizing these conclusions to business disciplines may be premature. Another multi-disciplinary comparison study by Klein, Noe, and Wang (2006) focused upon undergraduate business education in blended learning environments. Klein and colleagues found that blended learners with high learning goal orientation and who saw the environment as enabling rather than as a barrier had higher motivation to learn, which in turn was associated with course outcomes. They also found that learners had more control, were required to take a more active role in their learning, and had greater motivation to learn in blended environments. Although the number of multi-disciplinary comparative studies in business education is relatively limited, their findings support the premise that, at worst, there is generally no difference in learning outcomes between the two delivery mediums.
Program-Level Studies in Online Business Education

Other studies that have examined online business education beyond single-disciplinary perspectives include student surveys of perceptions and experiences with a degree program in its entirety and multi-institutional studies. Although these types of studies may not yield insights regarding particular disciplines, they do provide benefits such as identifying patterns of strong practice across institutions and allowing students and administrators to see benefits and potential problems of the program from the perspectives of newly admitted students' initial expectations or of those who have completed many or even all of its requirements. Such a perspective also allows for consideration of technological and pedagogical changes over time and the opportunity to assess whether the collective parts of the program result in an integrated whole. In one early study, Terry (2001) used archival data to compare attrition rates between online and classroom offerings over a three-year period. As a potential initial indication of disciplinary differences in business programs, he found that quantitatively-oriented online courses had substantially higher attrition rates than their classroom-based equivalents or online courses in other disciplines. Subsequent program-level empirical studies of students have tended to rely upon survey data, focus on students at the graduate level, and study students at all stages of their progress through the MBA program. The flexibility of the learning format, networking opportunities, and virtual teaming skills are consistent themes among the students in these studies. Bocchi, Eastman, and Swift's (2004) study of incoming cohorts to Georgia's Web MBA program found that AACSB accreditation and perceived flexibility were drivers of students' choice of program. Reflective of the multi-course studies reviewed earlier in the chapter, this study also suggested instructors would be key players in their program's retention of students, and therefore encouraged them to provide
regular and structured interaction, communicate expectations clearly in their courses, and help students develop group facilitation skills. Program flexibility also was a key selection factor for the mid-career professionals in Grzeda and Miller's (2009) study. However, in addition to wanting to acquire the traditional benefits of the MBA (knowledge of business, broader networks, and increased salary and promotion opportunities), they also wanted to be exposed to new ways of thinking about the world. Kim, Liu, and Bonk's (2005) study of mid-program online MBA students found that these students generally were positive about their educational experiences. They valued the program's flexible learning environment, appreciated the closer interaction with instructors, and welcomed the opportunity to cultivate virtual teaming skills. However, they noted difficulties with communication with peers, the desire for more interaction with instructors, and the lack of real-time feedback as program challenges. Multi-institutional studies are the least common of these types of studies. One study in this category was a collection of case studies compiled by Alavi and Gallupe (2003) during Fall 1999. Narrowing their focus to five early adopting exemplars, Alavi and Gallupe observed that these programs were implemented to support specific organizational strategies rather than as "add-on" features, and that the programs were supported by an internal culture of innovation. They also found that these programs required high levels of faculty training and ongoing student support, and that the resources required to develop and implement these initiatives were usually underestimated, often by a factor of two to three relative to the original estimates. Other institutional-level studies suggest that there are numerous close followers behind these relatively early adopters. Popovich and Neel (2005) found that AACSB International-accredited schools were rapidly following in the footsteps of these early adopters. They found that by 2002, at least 120 degree programs were delivered at least partially online, and nearly half of these
had been started since Alavi and Gallupe's study. However, Ozdemir, Altinkemer, and Barron (2008) found that this initial population of online learning providers in the United States tended to be lower-tier programs in less densely populated states, and that these schools were much more likely to offer graduate rather than undergraduate programs online. Other institutional studies have focused on the instructor rather than the program level. One primary conclusion from these studies is that instructors are largely self-trained for online teaching (Perreault, Waldman, Alexander & Zhao, 2002). More recent faculty studies have found that the perceived usefulness and rigor of online education are stronger predictors of acceptance of online education than is perceived ease of use of the technology (Gibson, Harris, & Colaric, 2008; Parthasurathy & Smith, 2009). Overall, faculty appear to become more satisfied with online teaching as they gain experience, and instructor concerns regarding online teaching have diminished substantially during the first decade of the 21st century (Alexander et al., 2009). In fact, some faculty have adopted online teaching in part because it is perceived to reflect positively on their institution's reputation with external constituents (Parthasurathy & Smith, 2009).
FUTURE RESEARCH DIRECTIONS FOR MULTI-DISCIPLINARY STUDIES

More Use of Discipline as a Moderating Variable

The first recommendation that emerges from a multi-disciplinary consideration of online teaching and learning is that disciplinary effects need to be examined more directly in future studies. For example, given the widely varying extent to which the various disciplines encourage learner-learner interaction (Arbaugh & Rau, 2007; Kellogg & Smith, 2009), it is possible that the equivocal
findings regarding its importance may be explained by disciplinary differences. Courses in 'hard' disciplines will likely require more emphasis on learner-instructor interaction; therefore, the importance of learner-learner interaction would be diminished (Arbaugh, 2010a). At a minimum, this suggests that those conducting multi-course studies need to account for potential disciplinary effects when designing their studies.
Further Study of Additional Potential Moderating Effects

To date, empirical studies of online management education have almost exclusively examined direct effects, typically the relationships between potential influencers of course outcomes and the outcomes themselves. With the exceptions of Klein and colleagues' (2006) examination of the extent to which course delivery mediums moderate the relationship between perceptual factors and student motivation to learn and Benbunan-Fich and Arbaugh's (2006) study of the interaction of knowledge construction and group collaboration on learning perceptions, moderating relationships between variables have been almost completely ignored in online management education research. Besides the previously discussed characteristics such as academic discipline, degree of blending, and program level, other potential moderators that researchers might consider in multi-disciplinary studies include the particular learning technology (Arbaugh, 2005b; Martins & Kellermanns, 2004) and emerging theories of online learning effectiveness such as the Community of Inquiry (Garrison & Arbaugh, 2007; Heckman & Annabi, 2006). Such study of potential moderating effects could lay a foundation for the development of mid-range and discipline-specific theories of online learning.
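To make the distinction between direct and moderating effects concrete, a moderating effect is typically tested as an interaction term in a regression model. The sketch below is purely illustrative; the variable names are assumptions for the sake of example rather than measures drawn from any of the studies cited above:

\[
Y_i = \beta_0 + \beta_1 X_i + \beta_2 M_i + \beta_3 (X_i \times M_i) + \varepsilon_i
\]

Here Y_i stands for a course outcome (e.g., perceived learning), X_i for a hypothesized influencer (e.g., learner-learner interaction), and M_i for a candidate moderator (e.g., academic discipline, degree of blending, or program level); a significant \beta_3 would indicate that the effect of X on Y depends on the level of M.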
Additional Conceptual and Operational Clarification of Blended Learning

Although business education scholars have a somewhat hidden history of contributing to the literature on blended learning (Arbaugh et al., 2009), management education research has, until very recently, generally failed to be explicit regarding whether a course is purely online or blended (Hwang & Arbaugh, 2009; Hwang & Francesco, 2010; Klein et al., 2006). This can be attributed in part to the fact that the literature on blended learning generally has lacked precision in operationalizing the construct (Bonk & Graham, 2006; Garrison & Kanuka, 2004; Picciano & Dziuban, 2007). This lack of specificity in denoting the degree of blending within courses clearly has limited the business education literature's ability to determine the conditions under which online or blended learning is most appropriate (Kellogg & Smith, 2009). Increasing conceptual and operational precision regarding just what constitutes a blended learning environment in such courses would allow researchers to address questions of optimal blends by content, discipline, and/or program level through comparison studies, much in the manner that fully online and fully classroom-based courses have been studied (Bernard et al., 2009; Sitzmann et al., 2006).
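As one possible illustration of such operational precision, offered here as an assumption for the sake of example rather than a definition taken from the works cited above, the degree of blending in a course could be reported as the proportion of instructional contact delivered online:

\[
\text{blend ratio} = \frac{\text{online instructional hours}}{\text{total instructional hours}}
\]

Reporting such a ratio, together with explicit cut-points for what counts as "blended" versus "fully online," would make comparison studies across courses and programs easier to interpret.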
More Studies of Faculty

Although many of the studies reviewed in this chapter suggest the importance of faculty in online and blended learning environments, very few studies in business education have directly employed samples of faculty. Encouraging faculty to study their colleagues could be both motivating and educational. Faculty have long been considered a 'neglected resource' in distance education (Dillon & Walsh, 1992). Although the reflective narrative account has been a common approach for sharing knowledge with fellow faculty in all
business disciplines, and faculty motivations for teaching online are beginning to receive research attention, studies examining the effects of faculty characteristics, such as age, gender, ethnicity, usage behaviors, and/or skill level in business education, are essentially non-existent (Arbaugh et al., 2009). This is due, in part, to the fact that many published studies use the same instructor or a small number of instructors, thereby limiting opportunities for examining variance in instructor characteristics and behaviors (Anstine & Skidmore, 2005). As was noted earlier, reducing such limitations is a primary benefit of multi-disciplinary studies. In addition to the impact of faculty on online courses, scholars also might consider studying the impact of online learning on faculty. Online learning does not just change the culture of the classroom; it also changes the culture and conditions of the faculty work environment. Although online faculty work environments are beginning to receive research attention in higher education (Shea, 2007), the impact of online learning on faculty job satisfaction, organizational commitment, and psychological and physical well-being has yet to be considered in the specific context of business schools (Arbaugh, Desai et al., 2010). Beyond faculty members' instructional roles, issues such as changes in work practices and environment, interaction with other faculty, and institutional commitment are topics that warrant further investigation.
More Rigorous Studies Using Undergraduate Samples

A recent survey of higher education institutions in the USA found that approximately 4.6 million students had taken at least one online course during 2008, and that 82 percent of these students were undergraduates (Allen & Seaman, 2010). However, many of the cross-disciplinary studies reviewed in this chapter have been composed of MBA student samples. This focus of multi-disciplinary work at the graduate level suggests
that perspectives from undergraduate students are underrepresented in the online and blended business education literature. Although this may reflect that students are widely distributed across colleges and majors at the undergraduate level, the sheer dominance of undergraduates as online learners makes it difficult to infer that the proportion of research conducted in business education to date is representative of the actual practice of online education being conducted in business schools (Arbaugh, 2010c). Although the number of exemplary multi-course studies of undergraduate business students is increasing (e.g., Gratton-Lavoie & Stanley, 2009; Hwang & Francesco, 2010; Klein et al., 2006; Nemanich et al., 2009; Martins & Kellermanns, 2004), scholars affiliated with business schools that are making inroads into online and blended delivery of undergraduate education have an opportunity to influence the entirety of the curriculum by pursuing the research opportunities provided by these initiatives.
More Scholars Studying the Topic

Although there has been a boom in the volume of research on online management education, this literature still suffers from the lack of a sizeable body of dedicated scholars studying the phenomenon. As this chapter demonstrates, the number of scholars making consistent contributions through multi-disciplinary studies is comparatively few. However, the positive side of this present state of the literature is that a new entrant could become somewhat established in the field relatively quickly. The rise over the last decade of highly regarded educational research journals in business, such as the Academy of Management Learning & Education (AMLE) and the Decision Sciences Journal of Innovative Education (DSJIE), should provide legitimacy for more business scholars to conduct research that examines the development and delivery of educational content. Why not invest that attention toward an
emerging phenomenon, such as online and blended learning, that readily and increasingly pervades our educational experience?
More Comprehensive and Objective Measures of Learning

Perceptual measures of course outcomes such as Alavi's (1994) measure of perceived learning and Arbaugh's (2000a) measure of delivery medium satisfaction have allowed management education researchers to design multi-course, multi-instructor, and multi-discipline studies, thereby increasing the external validity of research findings. However, such measures do not allow for objective assessment of student performance. An even greater concern is reflected in the results of a recent meta-analysis suggesting that such learner self-assessment items may measure affective rather than cognitive outcomes (Sitzmann, Ely, Brown, & Bauer, 2010). However, reliance upon seemingly objective measures such as course grades likely will not fully address the issue of measuring learning outcomes. Besides the need to adjust course grades to reflect differences across instructors (Arbaugh, 2005a), course-specific grades generally cannot be generalized across disciplines. Therefore, it behooves scholars to seek cross-disciplinary and multi-dimensional objective outcome measures to supplement the perceptual measures that this field has relied upon to date (Arbaugh, Desai, et al., 2010; Armstrong & Fukami, 2010; Carrell, 2010).
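One simple way to make grades more comparable across instructors, offered here only as an illustrative sketch of a common standardization approach rather than a procedure prescribed by the studies cited above, is to standardize each grade within its instructor before pooling courses:

\[
z_{ij} = \frac{g_{ij} - \bar{g}_j}{s_j}
\]

where g_{ij} is student i's grade in instructor j's course, and \bar{g}_j and s_j are the mean and standard deviation of the grades awarded by that instructor. This removes instructor-level grading differences but, as noted above, still does not make course-specific grades comparable across disciplines.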
CONCLUSION

Multi-disciplinary and program-level research in online and blended business education has progressed substantially from its beginnings in the mid-1990s. Such studies exert a primary influence on future research directions for the field. Multi-disciplinary studies are increasingly pointing to a particular type of collaboration – that
between students and the instructor – as being particularly predictive of outcomes in online business education. Although this certainly supports the design framework for "hard" disciplines presented in this chapter, it also suggests that the "co-creation" model often advocated in the mainstream online learning literature may not be fully applicable even in the "softer" business disciplines. In short, business education instructors may need to be both "sages on the stage" and "guides by the side" (Arbaugh, 2010b). Therefore, in addition to looking to the instructor for course leadership as a content expert (Arbaugh & Hwang, 2006; Liu et al., 2005), students also may be looking to them for leadership in navigating the "hidden curriculum" of online learning (Anderson, 2002). In spite of these noteworthy contributions, there remain numerous research questions and issues that will be best addressed by a continuing emphasis on multi-disciplinary studies in business education. The potential influences of academic discipline, degree of blending, instructor characteristics, and program level on learning outcomes are but a few of the issues best addressed in multi-disciplinary settings. Therefore, the call in this chapter for additional researchers is both necessary and useful. With such an increase in research activity, we hope that initiatives in business education will encourage online learning researchers in other disciplines to examine their work more from a curricular rather than exclusively from a course or disciplinary level.
REFERENCES

Alavi, M. (1994). Computer-mediated collaborative learning: An empirical evaluation. Management Information Systems Quarterly, 18, 159–174. doi:10.2307/249763
Alavi, M., & Gallupe, R. B. (2003). Using information technology in learning: Case studies in business and management education programs. Academy of Management Learning & Education, 2, 139–153. Alavi, M., Marakas, G. M., & Yoo, Y. (2002). A comparative study of distributed learning environments on learning outcomes. Information Systems Research, 13, 404–415. doi:10.1287/ isre.13.4.404.72 Alexander, M. W., Perrault, H., Zhao, J. J., & Waldman, L. (2009). Comparing AACSB faculty and student online learning experiences: Changes between 2000 and 2006. Journal of Educators Online, 6(1). Retrieved February 1, 2009, from http://www.thejeo.com/Archives/ Volume6Number1/Alexanderetalpaper.pdf Allen, I. E., & Seaman, J. (2010). Learning on demand: Online education in the United States, 2009. Wellesley, MA: Babson Survey Research Group. Anderson, T. (2002). The hidden curriculum of distance education. Change, 33(6), 28–35. doi:10.1080/00091380109601824 Anstine, J., & Skidmore, M. (2005). A small sample study of traditional and online courses with sample selection adjustment. The Journal of Economic Education, 36, 107–127. Arbaugh, J. B. (2000a). Virtual classroom characteristics and student satisfaction in Internet-based MBA courses. Journal of Management Education, 24, 32–54. doi:10.1177/105256290002400104 Arbaugh, J. B. (2000b). How classroom environment and student engagement affect learning in Internet-based MBA courses. Business Communication Quarterly, 63(4), 9–26. doi:10.1177/108056990006300402
15
Multi-Disciplinary Studies in Online Business Education
Arbaugh, J. B. (2001). How instructor immediacy behaviors affect student satisfaction and learning in Web-based courses. Business Communication Quarterly, 64(4), 42–54. doi:10.1177/108056990106400405
Arbaugh, J. B. (2010c). Do undergraduates and MBAs differ online? Initial conclusions from the literature. Journal of Leadership & Organizational Studies, 17, 129–142. doi:10.1177/1548051810364989
Arbaugh, J. B. (2002). Managing the on-line classroom: A study of technological and behavioral characteristics of Web-based MBA courses. The Journal of High Technology Management Research, 13, 203–223. doi:10.1016/S10478310(02)00049-4
Arbaugh, J. B., Bangert, A., & Cleveland-Innes, M. (2010). Subject matter effects and the community of inquiry (CoI) framework: An exploratory study. The Internet and Higher Education, 13, 37–44. doi:10.1016/j.iheduc.2009.10.006
Arbaugh, J. B. (2004). Learning to learn online: A study of perceptual changes between multiple online course experiences. The Internet and Higher Education, 7(3), 169–182. doi:10.1016/j. iheduc.2004.06.001 Arbaugh, J. B. (2005a). How much does subject matter matter? A study of disciplinary effects in Web-based MBA courses. Academy of Management Learning & Education, 4, 57–73. Arbaugh, J. B. (2005b). Is there an optimal design for on-line MBA courses? Academy of Management Learning & Education, 4, 135–149. Arbaugh, J. B. (2008). Does the community of inquiry framework predict outcomes in online MBA courses? International Review of Research in Open and Distance Learning, 9, 1–21. Arbaugh, J. B. (2010a). Online and blended business education for the 21st century: Current research and future directions. Oxford, UK: Chandos Publishing. Arbaugh, J. B. (2010b). Sage, guide, both, or neither? An exploration of instructor roles in online MBA courses. Computers & Education, 55, 1234–1244. doi:10.1016/j.compedu.2010.05.020
16
Arbaugh, J. B., & Benbunan-Fich, R. (2006). An investigation of epistemological and social dimensions of teaching in online learning environments. Academy of Management Learning & Education, 5, 435–447. Arbaugh, J. B., & Benbunan-Fich, R. (2007). Examining the influence of participant interaction modes in Web-based learning environments. Decision Support Systems, 43, 853–865. doi:10.1016/j. dss.2006.12.013 Arbaugh, J. B., Desai, A. B., Rau, B. L., & Sridhar, B. S. (2010). A review of research on online and blended learning in the management discipline: 1994-2009. Organization Management Journal, 7(1), 39–55. doi:10.1057/omj.2010.5 Arbaugh, J. B., & Duray, R. (2002). Technological and structural characteristics, student learning and satisfaction with Web-based courses: An exploratory study of two MBA programs. Management Learning, 33, 231–247. doi:10.1177/1350507602333003 Arbaugh, J. B., Godfrey, M. R., Johnson, M., Leisen Pollack, B., Niendorf, B., & Wresch, W. (2009). Research in online and blended learning in the business disciplines: Key findings and possible future directions. The Internet and Higher Education, 12(2), 71–87. doi:10.1016/j. iheduc.2009.06.006
Multi-Disciplinary Studies in Online Business Education
Arbaugh, J. B., & Hornik, S. C. (2006). Do Chickering and Gamson’s seven principles also apply to online MBAs? Journal of Educators Online, 3(2). Retrieved September 1, 2006, from http:// www.thejeo.com/ Arbaugh, J. B., & Hwang, A. (2006). Does teaching presence exist in online MBA courses? The Internet and Higher Education, 9(1), 9–21. doi:10.1016/j.iheduc.2005.12.001 Arbaugh, J. B., & Rau, B. L. (2007). A study of disciplinary, structural, and behavioral effects on course outcomes in online MBA courses. Decision Sciences Journal of Innovative Education, 5(1), 65–95. doi:10.1111/j.1540-4609.2007.00128.x Armstrong, S. J., & Fukami, C. V. (2010). Selfassessment of knowledge: A cognitive or affective measure? Perspectives from the management learning and education community. Academy of Management Learning & Education, 9, 335–341. Becher, T. (1994). The significance of disciplinary differences. Studies in Higher Education, 19, 151–161. doi:10.1080/03075079412331382007 Becher, T., & Trowler, P. R. (2001). Academic tribes and territories (2nd ed.). Berkshire, UK: Society for Research into Higher Education & Open University Press. Benbunan-Fich, R., & Arbaugh, J. B. (2006). Separating the effects of knowledge construction and group collaboration in Web-based courses. Information & Management, 43, 778–793. doi:10.1016/j.im.2005.09.001 Benbunan-Fich, R., & Hiltz, S. R. (2003). Mediators of the effectiveness of online courses. IEEE Transactions on Professional Communication, 46(4), 298–312. doi:10.1109/TPC.2003.819639
Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, E., Tamim, R. M., Surkes, M. A., & Bethel, E. C. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 79, 1243–1289. doi:10.3102/0034654309333844 Bickel, R. (2007). Multilevel analysis for applied research. New York, NY: Guilford. Biglan, A. (1973). The characteristics of subject matter in different academic areas. The Journal of Applied Psychology, 57(3), 195–203. doi:10.1037/ h0034701 Bocchi, J., Eastman, J. K., & Swift, C. O. (2004). Retaining the online learner: Profile of students in an online MBA program and implications for teaching them. Journal of Education for Business, 79, 245–253. doi:10.3200/JOEB.79.4.245-253 Bonk, C. J., & Graham, C. R. (Eds.). (2006). The handbook of blended learning: Global perspectives, local designs. San Francisco, CA: Pfeiffer. Bryant, S. M., Kahle, J. B., & Schafer, B. A. (2005). Distance education: A review of the contemporary literature. Issues in Accounting Education, 20, 255–272. doi:10.2308/iace.2005.20.3.255 Carrell, L. J. (2010). Thanks for asking: A (redfaced?) response from communication. Academy of Management Learning & Education, 9, 300–304. Dacko, S. G. (2001). Narrowing skill development gaps in Marketing and MBA programs: The role of innovative technologies for distance learning. Journal of Marketing Education, 23, 228–239. doi:10.1177/0273475301233008 Davis, F. (1989). Perceived usefulness, perceived ease of use and user acceptance of information technology. Management Information Systems Quarterly, 13, 319–340. doi:10.2307/249008
17
Multi-Disciplinary Studies in Online Business Education
Davis, R., & Wong, D. (2007). Conceptualizing and measuring the optimal experience of the e-learning environment. Decision Sciences Journal of Innovative Education, 5, 97–126. doi:10.1111/j.1540-4609.2007.00129.x Daymont, T., & Blau, G. (2008). Student performance in online and traditional sections of an undergraduate management course. Journal of Behavioral and Applied Management, 9, 275–294. Dehler, G. E., Beatty, J. E., & Leigh, J. S. A. (2010). From good teaching to scholarly teaching: Legitimizing management education and learning scholarship. In Wankel, C., & DeFillippi, R. (Eds.), Being and becoming a management education scholar (pp. 95–118). Charlotte, NC: Information Age Publishing. Dempsey, J. V., & Van Eck, R. N. (2002). Instructional design online: Evolving expectations. In Reiser, R. A., & Dempsey, J. V. (Eds.), Trends and issues in instructional design and technology (pp. 281–294). Upper Saddle River, NJ: Merrill Prentice-Hall. Dillon, C. L., & Walsh, S. M. (1992). Faculty: The neglected resource in distance education. American Journal of Distance Education, 6(3), 5–21. doi:10.1080/08923649209526796 Drago, W., & Peltier, J. (2004). The effects of class size on the effectiveness of online courses. Management Research News, 27(10), 27–41. doi:10.1108/01409170410784310 Drago, W., Peltier, J., Hay, A., & Hodgkinson, M. (2005). Dispelling the myths of online education: Learning via the information superhighway. Management Research News, 28(6/7), 1–17. doi:10.1108/01409170510784904 Drago, W., Peltier, J., & Sorensen, D. (2002). Course content or the instructor: Which is more important in online teaching? Management Research News, 25(6/7), 69–83. doi:10.1108/01409170210783322
18
Dykman, C. A., & Davis, C. K. (2008a). Online education forum part two – teaching online versus teaching conventionally. Journal of Information Systems Education, 19, 157–164. Dykman, C. A., & Davis, C. K. (2008b). Online education forum part three – a quality online educational experience. Journal of Information Systems Education, 19, 281–289. Engwall, L. (2007). The anatomy of management education. Scandinavian Journal of Management, 23, 4–35. doi:10.1016/j.scaman.2006.12.003 Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students’ perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4(2), 215–235. doi:10.1111/j.1540-4609.2006.00114.x Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2, 87–105. doi:10.1016/S1096-7516(00)00016-6 Garrison, D. R., & Arbaugh, J. B. (2007). Researching the community of inquiry framework: Review, issues and future directions. The Internet and Higher Education, 10(3), 157–172. doi:10.1016/j.iheduc.2007.04.001 Garrison, D. R., & Kanuka, H. (2004). Blended learning: Uncovering its transformative potential in higher education. The Internet and Higher Education, 7, 95–105. doi:10.1016/j. iheduc.2004.02.001 Gibson, S. G., Harris, M. L., & Colaric, S. M. (2008). Technology acceptance in an academic context: Faculty acceptance of online education. Journal of Education for Business, 83, 355–359. doi:10.3200/JOEB.83.6.355-359
Multi-Disciplinary Studies in Online Business Education
Grandzol, J. R., & Grandzol, C. J. (2006). Best practices for online business education. International Review of Research in Open and Distance Learning, 7(1), 1–17. Gratton-Lavoie, C., & Stanley, D. (2009). Teaching and learning of principles of microeconomics online: An empirical assessment. The Journal of Economic Education, 40(2), 3–25. doi:10.3200/ JECE.40.1.003-025 Grzeda, M., & Miller, G. E. (2009). The effectiveness of an online MBA program in meeting midcareer student expectations. Journal of Educators Online, 6(2). Retrieved November 10, 2009, from http://www.thejeo.com/Archives/ Volume6Number2/GrzedaandMillerPaper.pdf Hativa, N., & Marincovich, M. (Eds.). (1995). New directions for teaching and learning - disciplinary differences in teaching and learning: Implications for practice. San Francisco, CA: Jossey-Bass. Heckman, R., & Annabi, H. (2006). How the teacher’s role changes in online case study discussions. Journal of Information Systems Education, 17, 141–150. Hwang, A., & Arbaugh, J. B. (2006). Virtual and traditional feedback-seeking behaviors: Underlying competitive attitudes and consequent grade performance. Decision Sciences Journal of Innovative Education, 4, 1–28. doi:10.1111/j.15404609.2006.00099.x Hwang, A., & Arbaugh, J. B. (2009). Seeking feedback in blended learning: Competitive versus cooperative student attitudes and their links to learning outcome. Journal of Computer Assisted Learning, 25, 280–293. doi:10.1111/j.13652729.2009.00311.x Hwang, A., & Francesco, A. M. (2010). The influence of individualism-collectivism and power distance on use of feedback channels and consequences for learning. Academy of Management Learning & Education, 9, 243–257.
Jonassen, D., Davidson, M., Collins, M., Campbell, J., & Haag, B. B. (1995). Constructivism and computer-mediated communication in distance education. American Journal of Distance Education, 9(2), 7–26. doi:10.1080/08923649509526885 Jonassen, D. H., Tessmer, M., & Hannum, W. H. (1999). Task analysis methods for instructional design. Mahwah, NJ: Erlbaum. Jones, R., Moeeni, F., & Ruby, P. (2005). Comparing Web-based content delivery and instructor-led learning in a telecommunications course. Journal of Information Systems Education, 16, 265–271. Julian, S. D., & Ofori-Dankwa, J. C. (2006). Is accreditation good for the strategic decision making of traditional business schools? Academy of Management Learning & Education, 5, 225–233. Kellogg, D. L., & Smith, M. A. (2009). Studentto-student interaction revisited: A case study of working adult business students in online courses. Decision Sciences Journal of Innovative Education, 7, 433–456. doi:10.1111/j.15404609.2009.00224.x Khurana, R. (2007). From higher aims to hired hands: The social transformation of American business schools and the unfulfilled promise of management as a profession. Princeton, NJ: Princeton University Press. Kim, K.-J., Liu, S., & Bonk, C. J. (2005). Online MBA students’ perceptions of online learning: Benefits, challenges and suggestions. The Internet and Higher Education, 8, 335–344. doi:10.1016/j. iheduc.2005.09.005 Klein, H. J., Noe, R. A., & Wang, C. (2006). Motivation to learn and course outcomes: The impact of delivery mode, learning goal orientation, and perceived barriers and enablers. Personnel Psychology, 59, 665–702. doi:10.1111/j.17446570.2006.00050.x
19
Multi-Disciplinary Studies in Online Business Education
Kuhn, T. S. (1970). The structure of scientific revolutions (2nd ed.). Chicago, IL: University of Chicago Press. Landry, B. J. L., Griffeth, R., & Hartman, S. (2006). Measuring student perceptions of blackboard using the technology acceptance model. Decision Sciences Journal of Innovative Education, 4, 87–99. doi:10.1111/j.1540-4609.2006.00103.x Lattuca, L. R., & Stark, J. S. (1994). Will disciplinary perspectives impede curricular reform? The Journal of Higher Education, 65, 401–426. doi:10.2307/2943853 Leidner, D. E., & Jarvenpaa, S. L. (1995). The use of information technology to enhance management school education: A theoretical view. Management Information Systems Quarterly, 19, 265–291. doi:10.2307/249596 Liu, X., Magjuka, R. J., & Lee, S. 2006. An empirical examination of sense of community and its effects on students’ satisfaction, perceived learning outcome, and learning engagement in online MBA courses. International Journal of Instructional Technology & Distance Learning, 3(7). Retrieved September 1, 2006, from http:// www.itdl.org/Journal/Jul_06/article01.htm Lohdahl, J. B., & Gordon, G. (1972). The structure of scientific fields and the functioning of university graduate departments. American Sociological Review, 37, 57–72. doi:10.2307/2093493 Marks, R. B., Sibley, S., & Arbaugh, J. B. (2005). A structural equation model of predictors for effective online learning. Journal of Management Education, 29, 531–563. doi:10.1177/1052562904271199 Martins, L. L., & Kellermans, F. W. (2004). A model of business school students’ acceptance of a Web-based course management system. Academy of Management Learning & Education, 3, 7–26.
20
Merrill, M. D. (2001). Components of instruction toward a theoretical tool of instructional design. Instructional Science, 29, 291–310. doi:10.1023/A:1011943808888 Millson, M. R., & Wilemon, D. (2008). Educational quality correlates of online graduate management education. Journal of Distance Education, 22(3), 1–18. Murray, H. G., & Renaud, R. D. (1995). Disciplinary differences in teaching and learning: Implications for practice. New Directions for Teaching and Learning, 64, 31–39. doi:10.1002/ tl.37219956406 Navarro, P. (2008). The core curricula of topranked U.S. business schools: A study in failure? Academy of Management Learning & Education, 7, 108–123. Nemanich, L., Banks, M., & Vera, D. (2009). Enhancing knowledge transfer in classroom versus online settings: The interplay among instructor, student, content, and context. Decision Sciences Journal of Innovative Education, 7, 123–148. doi:10.1111/j.1540-4609.2008.00208.x Neumann, R. (2001). Disciplinary differences and university teaching. Studies in Higher Education, 26, 135–146. doi:10.1080/03075070120052071 Neumann, R., Parry, S., & Becher, T. (2002). Teaching and learning in their disciplinary contexts: A conceptual analysis. Studies in Higher Education, 27, 405–417. doi:10.1080/0307507022000011525 O’Toole, J. (2009). The pluralistic future of management education. In Armstrong, S. J., & Fukami, C. V. (Eds.), The SAGE handbook of management learning, education, and development (pp. 547–558). London, UK: SAGE Publications. Ozdemir, Z. D., Altinkemer, K., & Barron, J. M. (2008). Adoption of technology-mediated learning in the U.S. Decision Support Systems, 45, 324–337. doi:10.1016/j.dss.2008.01.001
Multi-Disciplinary Studies in Online Business Education
Parthasurathy, M., & Smith, M. A. 2009. Valuing the institution: An expanded list of factors influencing faculty adoption of online education. Online Journal of Distance Learning Administration, 12(2). Retrieved October 15, 2009, from http:// www.westga.edu/~distance/ ojdla/summer122/ parthasarathy122.html Peltier, J. W., Drago, W., & Schibrowsky, J. A. (2003). Virtual communities and the assessment of online marketing education. Journal of Marketing Education, 25, 260–276. doi:10.1177/0273475303257762 Peltier, J. W., Schibrowsky, J. A., & Drago, W. (2007). The interdependence of the factors influencing the perceived quality of the online learning experience: A causal model. Journal of Marketing Education, 29, 140–153. doi:10.1177/0273475307302016 Perreault, H., Waldman, L., Alexander, M., & Zhao, J. (2002). Overcoming barriers to successful delivery of distance-learning courses. Journal of Education for Business, 77, 313–318. doi:10.1080/08832320209599681 Popovich, C. J., & Neel, R. E. (2005). Characteristics of distance education programs at accredited business schools. American Journal of Distance Education, 19, 229–240. doi:10.1207/ s15389286ajde1904_4 Reigeluth, C. M. (Ed.). (1983). Instructional design theories and models: An overview of their current status. Hillsdale, NJ: Erlbaum. Reiser, R. A., & Gagne, R. M. (1983). Selecting media for instruction. Englewood Cliffs, NJ: Instructional Technology. Rubin, R. S., & Dierdorff, E. C. (2009). How relevant is the MBA? Assessing the alignment of required curricula and required managerial competencies. Academy of Management Learning & Education, 8, 208–224.
Rungtusanatham, M., Ellram, L. M., Siferd, S. P., & Salik, S. (2004). Toward a typology of business education in the Internet age. Decision Sciences Journal of Innovative Education, 2, 101–120. doi:10.1111/j.1540-4609.2004.00040.x Saade, R. G., Tan, W., & Nebebe, F. (2008). Impact of motivation on intentions in online learning: Canada vs. China. Issues in Informing Science and Information Technology, 5, 137–147. Shea, P. (2007). Bridges and barriers to teaching online college courses: A study of experienced online faculty in thirty-six colleges. Journal of Asynchronous Learning Networks, 11(2), 73–128. Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1–22. Shulman, L. S. (2005). Signature pedagogies in the professions. Daedalus, 134(3), 52–59. doi:10.1162/0011526054622015 Simmering, M. J., Posey, C., & Piccoli, G. (2009). Computer self-efficacy and motivation to learn in a self-directed online course. Decision Sciences Journal of Innovative Education, 7, 99–121. doi:10.1111/j.1540-4609.2008.00207.x Sitzmann, T., Ely, K., Brown, K. G., & Bauer, K. N. (2010). Self-assessment of knowledge: A cognitive learning or affective measure? Academy of Management Learning & Education, 9, 169–191. Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative effectiveness of Web-based and classroom instruction: A metaanalysis. Personnel Psychology, 59, 623–664. doi:10.1111/j.1744-6570.2006.00049.x Smart, J. C., & Ethington, C. A. (1995). Disciplinary and institutional differences in undergraduate education goals: Implications for practice. New Directions for Teaching and Learning, 64, 49–57. doi:10.1002/tl.37219956408
21
Multi-Disciplinary Studies in Online Business Education
Smeby, J.-C. (1996). Disciplinary differences in university teaching. Studies in Higher Education, 21(1), 69–79. doi:10.1080/030750796123 31381467 Smith, G. G., Heindel, A. J., & Torres-Ayala, A. T. (2008). E-learning commodity or community: Disciplinary differences between online courses. The Internet and Higher Education, 11, 152–159. doi:10.1016/j.iheduc.2008.06.008 Stoel, L., & Lee, K. H. (2003). Modeling the effect of experience on student acceptance of Web-based course software. Internet Research: Electronic Networking Applications and Policy, 13, 364–374. doi:10.1108/10662240310501649 Terry, N. (2001). Assessing enrollment and attrition rates for the online MBA. T.H.E. Journal, 28(7), 64–68. Thompson, J. D., Hawkes, R. W., & Avery, R. W. (1969). Truth strategies and university organization. Educational Administration Quarterly, 5(2), 4–25. doi:10.1177/0013131X6900500202 Trank, C. Q., & Rynes, S. L. (2003). Who moved our cheese? Reclaiming professionalism in business education. Academy of Management Learning & Education, 2, 189–205. Van Patten, J., Chao, C., & Riegeluth, C. (1986). A review of strategies for sequencing and synthesizing instruction. Review of Educational Research, 56, 437–471. doi:10.3102/00346543056004437 Venkatesh, V., & Davis, F. D. (2000). Theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46, 186–204. doi:10.1287/ mnsc.46.2.186.11926 Wan, Z., Fang, Y., & Neufeld, D. J. (2007). The role of information technology in technologymediated learning: A review of the past for the future. Journal of Information Systems Education, 18, 183–192.
22
Whetten, D. A., Johnson, T. D., & Sorenson, D. L. (2009). Learning-centered course design. In Armstrong, S. J., & Fukami, C. V. (Eds.), The SAGE handbook of management learning, education, and development (pp. 254–270). London, UK: Sage. Williams, E. A., Duray, R., & Reddy, V. (2006). Teamwork orientation, group cohesiveness, and student learning: A study of the use of teams in online distance education. Journal of Management Education, 30, 592–616. doi:10.1177/1052562905276740
KEY TERMS AND DEFINITIONS

"Applied" Disciplines: Academic disciplines where the primary focus is the application of knowledge.
"Hard" Disciplines: Academic disciplines where general consensus has emerged amongst scholars regarding the field's dominant paradigms.
"Life" Disciplines: Academic disciplines that study living things.
Multi-Disciplinary Studies: Research studies that are comprised of samples that include courses from more than one academic discipline.
"Non-Life" Disciplines: Academic disciplines that study non-living things.
"Pure" Disciplines: Academic disciplines where the primary focus is the creation and acquisition of knowledge.
"Soft" Disciplines: Academic disciplines characterized by multiple competing paradigms as possible explanations of their phenomena of interest.
Chapter 2
Learning and Satisfaction in Online Communities of Inquiry
Zehra Akyol, Canada
D. Randy Garrison, University of Calgary, Canada
ABSTRACT

The purpose of this chapter is to explain the capability of the Community of Inquiry (CoI) framework as a research model to study student learning and satisfaction. The framework identifies three elements (social, cognitive, and teaching presence) that contribute directly to the success of an e-learning experience through the development of an effective CoI. It is argued that a CoI leads to higher learning and increased satisfaction. The chapter presents findings from two online courses designed using the CoI approach. Overall, the students in these courses had high levels of perceived learning and satisfaction, as well as actual learning outcomes.
INTRODUCTION

Online learning has reached a point where it has been accepted as an important alternative or enhancement to traditional face-to-face education. The changing needs and expectations of 21st century students and advances in communication technologies are the main reasons for this development. However, there are still concerns about the quality of online learning programs, which raises the
question of how to evaluate the success of online learning. The literature points out two variables that have been studied extensively: learning and satisfaction. In order to increase the effectiveness of online learning programs, researchers have been exploring factors and issues affecting students’ learning and satisfaction in online environments as well as developing and applying strategies and theories to enhance their learning and satisfaction. In this chapter, an overview of the CoI framework as one promising theory to achieve higher levels
of learning and satisfaction is introduced along with supporting research.
BACKGROUND

An important line of research regarding learning online has been the exploration of the challenges and factors affecting the success of students' learning experiences. For example, Mingming and Evelyn (1999) found eleven factors significantly related to students' perceived learning:
• instructor-student interaction,
• instructor-student communication,
• instructor evaluation,
• instructor responses,
• student-student interaction,
• student-student communication,
• online discussion,
• written assignments,
• learning style,
• prior computer competency, and
• time spent on a course.
However, the most influential factors were students' perceived interaction with their instructor, followed by online discussion. Similarly, Eom, Wen and Ashill (2006) examined several factors, from course structure to self-motivation, as potential determinants of perceived learning outcomes and satisfaction in asynchronous online learning courses. The results showed that only two of them, learning style and instructor feedback, affected perceived learning outcomes. In terms of satisfaction with an online learning experience, however, there is less consensus. Researchers have identified a wide range of variables associated with satisfaction (Lin & Overbaugh, 2007; Martz, Reddy & Sangermano, 2004; Sahin, 2007; Sun, Tsai, Finger, Chen & Yeh, 2008). The common theme is that instructor support and interaction contribute significantly to learner satisfaction. Similarly, it has been shown that small
group interaction (Driver, 2002) or collaborative interaction (Jung, Choi, Lim & Leem, 2002; So & Brush, 2008) created higher levels of social presence and satisfaction. Researchers have also begun to investigate the relationship between students' perceived learning and satisfaction and a sense of community. This coincides with an increasing emphasis on community building in online learning environments. The research of Rovai (2002) provided evidence for the relationship between sense of community and perceived learning. He concluded that online learners who have a stronger sense of community and greater perceived learning feel less isolated and have greater satisfaction with their academic programs; this reduced isolation, in turn, resulted in fewer dropouts. Parenthetically, this link between satisfaction and retention was also found by Schreiner (2009). Harvey, Moller, Huett, Godshalk and Downs (2007) also investigated whether a stronger sense of community would lead to increased learning and productivity in asynchronous environments. They found that more peer interactions, as expressed by community comments, resulted in higher learning as evidenced by higher grades. In other words, learning occurred within the teams as they worked together to complete their projects. Many other studies have also confirmed the impact of community on students' learning and satisfaction in online environments (e.g., Ertmer & Stepich, 2004; Shea, 2006; Shea, Li, & Pickett, 2006; Liu, Magjuka, Bonk & Lee, 2007). Considering all the previous studies, the evidence suggests that a community of inquiry approach may lead to higher levels of learning and satisfaction. This is reinforced by Palloff and Pratt (2005), who indicate that creating and sustaining a community for online learning enhances student satisfaction and learning through community involvement. The potential of the Community of Inquiry (CoI) framework developed by Garrison, Anderson and Archer (2000) is derived from its ability to provide a comprehensive look at how
learning occurs in online learning contexts. Most of the above variables influencing learning and satisfaction are taken into consideration by the elements of a community of inquiry. Garrison and Cleveland-Innes (2004) claim that when all these three elements of a learning community (social, cognitive, and teaching presence) are integrated harmoniously in a way that supports critical discourse and reflection, then satisfaction and success result. Other studies using the CoI framework also provide evidence for the impact of social, cognitive and teaching presence on learning and satisfaction (e.g., Richardson & Swan, 2003; Shea, Pickett & Pelz, 2003, 2004; Akyol & Garrison, 2008, 2011; Shea & Bidjerano, 2009). The purpose of this chapter is to explore the inherent capability of the CoI framework to enhance students’ learning and satisfaction by focusing on the role of each element (social, cognitive and teaching presence). In arguing that developing an effective community of inquiry leads to higher learning and increased satisfaction, the findings from two online courses designed by applying the CoI framework will be presented.
COMMUNITY OF INQUIRY FRAMEWORK

An online learning community is valuable as it serves social needs as well as enhancing student satisfaction and learning through community involvement (Palloff & Pratt, 2005). The CoI framework embraces collaborative constructivist approaches as the basis of inquiry. The framework is comprised of three elements essential to this purpose: social presence, cognitive presence and teaching presence. All three elements together construct purposeful discourse and reflection (Garrison & Vaughan, 2008). The role of each presence in achieving high levels of learning and satisfaction is discussed next.
Social Presence

Garrison (2009) defines social presence as "the ability of participants to identify with the community (e.g., course of study), communicate purposefully in a trusting environment, and develop inter-personal relationships by way of projecting their individual personalities" (p. 352). The role of social presence is to establish relationships and a sense of belonging in order to create the climate for open communication and to support critical thinking in a community of learners (Garrison & Anderson, 2003). The research of Shea and Bidjerano (2009) also suggests that it is crucial to assist learners to gain comfort and confidence in the online discussion format in order to foster cognitive activity. Similarly, Harvey et al. (2007) emphasize the social aspect of asynchronous communication for both the quality of the learning experience and the quality of group work. The students in their study reported building their self-confidence through independent and collaborative research, and being proud of their team's efforts and outcomes resulted in a sense of satisfaction among members of the learning communities. Moreover, they also expressed a desire for more collaborative work (Harvey et al., 2007). Many other studies have also provided evidence for the relationship between social presence and learning and/or satisfaction (e.g., Gunawardena & Zittle, 1997; Picciano, 2002; Tu & McIsaac, 2002; Richardson & Swan, 2003; Swan & Shih, 2005; Akyol & Garrison, 2008, 2011; Boston, Diaz, Gibson, Ice, Richardson & Swan, 2009). The more comfortable students feel and the stronger their sense of belonging to a group, the higher the levels of satisfaction and learning that can be expected. In this regard, instructional design that supports the development of social presence is crucial (Swan & Shih, 2005; Shea & Bidjerano, 2009). The CoI framework helps instructional designers and online instructors by illuminating the way social presence develops and how it interacts with the other presences. Social presence in an online learning environment usually starts with more
affective expression (i.e., self-disclosure) and evolves to group cohesion through continuous open communication (Akyol & Garrison, 2008).
Teaching Presence

Instructors play key roles in students' learning in both traditional face-to-face and online learning environments. However, in online learning students may feel the need for more instructional guidance from the instructor (Akyol, Garrison, & Ozden, 2009). In a recent study conducted by Paechter, Maier and Macher (2010), it was found that students experience the instructor's support and expertise as being especially important for the acquisition of knowledge and skills, and for course satisfaction, in online learning. Anderson (2004) outlines the qualities of an excellent online learning teacher. First, he proposes having sufficient technical skills to navigate and contribute effectively within the online learning context. Next, he emphasizes developing a sense of trust and safety so that learners will not feel uncomfortable and constrained in posting their thoughts and comments. Finally, an effective online learning teacher must have resilience, innovativeness, and perseverance. It is clear that teaching online represents a new challenge that requires a new set of responsibilities and roles. The teaching presence construct defined within the context of the CoI framework speaks to these qualities. Teaching presence addresses the design, facilitation and direction of cognitive and social processes to support and enhance a meaningful learning experience. Teaching presence includes the possibility that any participant could assume the associated responsibilities. As such, Garrison and Anderson (2003) emphasize sharing the roles and responsibilities of a teacher among students. This has been supported by the students in studies by Rourke and Anderson (2002) and Akyol, Garrison and Ozden (2009). Teaching presence has a regulatory and mediating role in creating an effective community of
inquiry by balancing social and cognitive processes congruent with the intended learning outcomes and the needs and capabilities of the learners (Garrison & Anderson, 2003). This critical role has been confirmed by other research (e.g., Shea et al., 2006; Book & Oliver, 2007). It has been found that instructors who develop strong practices in terms of establishing a reason and context for communication, enabling communication, supporting communication, and moderating communication are likely to support community development (Book & Oliver, 2007). Overall, students value frequent feedback from the instructor and consider it important for improving the quality of online learning. In addition to the relationship between sense of community and teaching presence, the relationship between teaching presence and students' learning and satisfaction has also been evidenced in the literature (Baker, 2004; Shea, Pickett and Pelz, 2003, 2004; Akyol & Garrison, 2008).
Cognitive Presence

The ultimate purpose of an educational community of inquiry is to create an intellectual environment that supports sustained critical discourse and higher-order knowledge acquisition and application (Garrison & Anderson, 2003). To a large degree, social and teaching presence facilitate the creation of a community for the purpose of sustaining cognitive presence through practical inquiry. As Harvey et al. (2007) indicated, it is difficult to imagine that learners would engage in substantive and rich conversations without the feelings of acceptance that a community provides. The community provides the required emotional and leadership support through social and teaching presence in order to develop high levels of cognitive presence. The CoI framework is a process model. This is perhaps best reflected within the cognitive presence element. Cognitive presence is comprised of the progressive phases of practical inquiry leading to resolution of a problem or dilemma. It is
developmental in nature, starting with a triggering event and aiming to reach resolution. In order to achieve deep and meaningful learning, it is important to engage learners in the inquiry process. Cognitive presence is described as a condition of higher-order thinking and learning (Garrison & Anderson, 2003). Akyol and Garrison (2011) revealed this close relationship when they found high levels of cognition (i.e., integration) as well as high levels of perceived learning, satisfaction and actual grades. For students to be able to engage in high levels of cognitive inquiry requires skillful marshalling of teaching and social presence (Shea & Bidjerano, 2009). In order to develop an effective community of inquiry in an online learning environment, the integration of the elements should be designed, facilitated and directed based on the purpose, participants and technological context of the learning experience (Akyol & Garrison, 2008). Establishing social presence is one of the first and most important challenges for instructors and instructional designers, as it is a precondition to establishing a collaborative learning experience. Keeping this in mind, special efforts must be made to allow participants to introduce themselves in the first session, and social presence can be sustained over time through the use of chat rooms, collaborative assignments and discourse (Garrison & Anderson, 2003). When group cohesion and trust are strong, the transition through the phases of cognitive presence will also be easier. Garrison and Anderson (2003) also suggest dividing the group into smaller groups for discussion to support cognitive presence and social presence. Learning activities and assessment should be congruent with the learning outcomes to enhance cognitive presence. Most of these practical guidelines are associated with the design and organization aspect of teaching presence. Throughout the course, the facilitating discourse and direct instruction aspects of teaching presence can be shared between the instructor and students, depending on the level of the students. This strategy encourages students to
monitor and manage their learning by increasing their metacognitive awareness and learning how to learn (Garrison & Anderson, 2003). The value of the CoI framework to study perceived learning and satisfaction is demonstrated in the study described next.
RESEARCH In this section, we present the results of a study that investigated students' learning and satisfaction. The study used the CoI framework to design and develop two online courses.
Methodology The purpose of this research was to examine students' learning and satisfaction levels in online communities of inquiry. The main research question guiding this study was whether the community of inquiry approach can create an effective online learning environment that supports high levels of learning and satisfaction. The context of the research was a graduate-level online course given in the fall and spring terms at a large research-based university. In both courses, learning activities, strategies, and assessment techniques were all developed to reflect social, cognitive, and teaching presence. Both the instructor and the topic of the courses were the same; only the durations of the two courses differed. The major assignments in both courses were article critiques and peer reviews, weekly online discussions, and prototype course redesign projects. The instructor modeled how to facilitate the discussions in the first online discussion; in the remaining weeks the students facilitated and directed the discussions, both to take more responsibility for their learning and to distribute teaching presence between the instructor and the students. The instructor's modeling of effective facilitation also encouraged the development of cognitive presence and social presence in both online courses.
Table 1. The indicators of the categories of the CoI elements
Social Presence: Open Communication (learning climate/risk-free expression); Group Cohesion (group identity/collaboration); Personal/Affective (self-projection/expressing emotions)
Teaching Presence: Design & Organization (setting curriculum & methods); Facilitating Discourse (shaping constructive exchange); Direct Instruction (focusing and resolving issues)
Cognitive Presence: Triggering Event (sense of puzzlement); Exploration (information exchange); Integration (connecting ideas); Resolution (applying new ideas)
For example, social presence was created by a warm welcome from the instructor in the first synchronous meeting (through Elluminate) and reinforced via students' home pages and collaborative activities throughout the course. Cognitive presence was created and sustained when students felt comfortable expressing and sharing their ideas in order to construct the knowledge and skills needed for their article critique assignment and course redesign prototype project. There were 36 students (12 males and 24 females) enrolled in the two online courses. The data used to explore learning and satisfaction in the community of inquiry developed in each online course were obtained through transcript analysis of the online discussions and the CoI Survey (Arbaugh, Cleveland-Innes, Diaz, Garrison, Ice, Richardson, Shea, & Swan, 2008). Transcript analysis was applied in order to code and explore posting patterns of social presence, teaching presence, and cognitive presence based on the category indicators defined in the CoI framework (Garrison & Anderson, 2003). Social presence was analyzed in the transcripts by coding for affective expression, open communication, and group cohesion. Teaching presence was coded for design and organization, facilitating discourse, and direct instruction. Cognitive presence was coded using the indicators of the four phases of the Practical Inquiry model: triggering
event, exploration, integration, and resolution (see Table 1). In total, eight weeks of discussions in the fall course and four weeks of discussions in the spring course were analyzed. The first author, with a research assistant, conducted the transcript analysis of the fall term course after training and pilot coding. Initial inter-rater reliability was .75 for the pilot coding. The researchers coded the transcripts separately and then actively discussed coding differences in order to reach a consensus. This negotiated coding strategy increased reliability by allowing refinement of the coding scheme and controlling for simple error (Garrison, Cleveland-Innes, Koole, & Kappelman, 2006). One hundred percent agreement was reached on each online discussion in the fall term course. After gaining experience with the discussions in the fall course, the first author analyzed the discussions in the spring course. The Community of Inquiry (CoI) survey was administered at the end of each course to assess students' perceptions of each constituent element (presence) of the CoI framework as well as their perceived learning and satisfaction. Cronbach's alpha was 0.94 for teaching presence, 0.91 for social presence, and 0.95 for cognitive presence (Arbaugh et al., 2008). Thirty students (15 from each course) completed the survey. In addition to students' self-reports of learning and satisfaction, their final grades were also used in the
Table 2. The percentages of the messages that include the indicators of CoI elements
(Social Presence / Teaching Presence / Cognitive Presence)
Fall term course: 94.1% / 53.8% / 90.2%
Spring term course: 90.2% / 57.1% / 79.4%
research to provide a better view of learning and satisfaction in the online communities of inquiry. The final grades were comprised of 25% article critique assignment, 25% online discussion activity and 50% course redesign prototype project.
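For readers less familiar with the internal-consistency statistic cited above, the following minimal sketch shows how a Cronbach's alpha of the kind reported for the CoI subscales can be computed from a respondent-by-item response matrix. It is illustrative only: the six-respondent sample and the four-item subscale are hypothetical and are not taken from the study; numpy is assumed to be available.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items in the subscale
    item_variances = items.var(axis=0, ddof=1)      # variance of each item across respondents
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale scores
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical 1-5 Likert responses from six students to a four-item presence subscale
responses = np.array([
    [4, 5, 4, 5],
    [3, 4, 4, 3],
    [5, 5, 4, 5],
    [2, 3, 3, 2],
    [4, 4, 5, 4],
    [3, 3, 4, 3],
])
print(round(cronbach_alpha(responses), 2))
```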
Findings The results of the transcript analysis indicated that a community of inquiry developed in each course. Table 2 shows the percentages of the messages that included the indicators of social presence, teaching presence, and cognitive presence. This result could be attributed to the instructional design: the CoI approach applied in both courses to design the instructional strategies, methods, and learning activities supported the development of social presence, teaching presence, and cognitive presence. Further analysis was conducted by Akyol, Vaughan, and Garrison (in press) to see whether there were developmental differences between the courses due to the difference in course duration. It was found that there were significant differences on categories of each presence, generally favoring the course with the longer duration (i.e., it takes time to reach higher levels of inquiry). However, this is not the focus of this paper. The focus here was on whether a community of inquiry developed in each course, which was confirmed by the transcript analysis results summarized in Table 2. The primary question here is the level of perceived learning and satisfaction developed in each course. The descriptive analysis of the CoI Survey also showed that students' perceptions of each presence were high in both courses,
confirming the transcript analysis finding that students could sense each element of the CoI (see Table 3). Transcript analysis also showed that some categories of the CoI elements were found more frequently than others in both courses. The open communication category of social presence (occurring as continuing a thread, quoting from and referring to others' messages, asking questions, complimenting, or expressing agreement/disagreement) was coded highest in both courses. In terms of cognitive presence, the integration phase was the most frequently reached cognitive level in both courses, which is contrary to most previous studies (e.g., Garrison, Anderson, & Archer, 2000; McKlin, Harmon, Evans, & Jone, 2002; Meyer, 2003; Pawan, Paulus, Yalcin, & Chang, 2003; Vaughan & Garrison, 2005; Kanuka, Rourke, & Laflamme, 2007). The frequency of teaching presence categories differed between the two courses. In the fall term online course, the direct instruction category was coded most frequently, whereas facilitating discourse was highest in the spring term online course. This difference could be attributed to the difference in course duration: students in the spring term online course might have needed more time to perform direct instruction activities such as sharing and injecting knowledge from diverse sources. In both courses, students' perceived learning and satisfaction were also high, indicating that students agreed that they learned a lot in the course and were satisfied with it. Consistent with a high perception of learning, students' grades were also high in both online courses. Students' final grades in both courses
Table 3. Students' perceptions of each presence and learning and satisfaction in online courses
(Social Presence / Teaching Presence / Cognitive Presence / Perceived Learning / Satisfaction)
Fall term course: 3.94 / 4.15 / 4.07 / 4.2 / 4.47
Spring term course: 4.06 / 4.63 / 4.24 / 4.67 / 4.87
were essentially identical. The means were 94.2 for the fall term online course and 92.1 for the spring term online course. This also indicates that students successfully completed their article critique assignments, attended the online discussions, and applied their course redesign ideas and the knowledge they gained to their projects. Overall, these results suggest that a community of inquiry is associated with the level of students' perceived learning and satisfaction. As Akyol and Garrison (2011) indicate, the strength of the CoI framework is its emphasis on collaborative constructivist approaches for designing online learning environments to achieve deep and meaningful learning. The study of Benbunan-Fich and Arbaugh (2006) also confirmed the ascendancy of collaborative constructivist approaches. The authors found evidence to suggest that group collaboration or knowledge construction can potentially improve students' perceived learning and final grades. The findings here also provide evidence for the importance of a community of inquiry in achieving student satisfaction and learning outcomes in an e-learning environment (Akyol & Garrison, 2011).
FUTURE RESEARCH DIRECTIONS Notwithstanding the considerable research that has reported a relationship between satisfaction and perceived learning, there is a need for additional research that focuses on large scale studies conducted within a comprehensive theoretical framework. The framework that has shown considerable promise in exploring the complexities
of e-learning is the CoI framework (Garrison & Arbaugh, 2007). Perhaps the first challenge is to better understand the role that a sense of community plays in student satisfaction and the quality of learning outcomes. Another topic worthy of further research is understanding the dynamics of a community of inquiry that engage learners and ensure that they achieve the intended learning outcomes (Akyol & Garrison, 2008). With regard to perceived and actual learning, much more research needs to be done on cognitive presence and on understanding the pedagogical challenges of ensuring that learners complete the inquiry cycle. Meta-cognition must also play an important role in the inquiry process and should be studied. Finally, it has been shown that social presence plays an important mediating function between teaching and cognitive presence (Garrison, Cleveland-Innes, & Fung, 2010; Shea & Bidjerano, 2009). More research is needed to fully understand the nature of this relationship and the importance of social presence in a community of inquiry. In terms of using the CoI framework to conduct large-scale studies, a group of researchers has developed an instrument, reported previously, that can be administered to efficiently and validly measure each of the presences of a community of inquiry (Arbaugh et al., 2008). This instrument can be used to design large-scale studies to explore a wide range of topics relevant to e-learning across institutions and disciplines. These studies will be essential if we are to understand the factors that contribute to the success of online course delivery systems and to guide institutions in the adoption of e-learning approaches.
CONCLUSION The main emphasis of the CoI framework is to create an effective community that enhances and supports learning and satisfaction. Building a learning community is valuable as it serves social needs as well as enhancing student satisfaction and learning through community involvement (Palloff & Pratt, 2005). Previous studies have confirmed the effectiveness of the CoI framework for developing productive learning communities (Akyol & Garrison, 2008, 2009, 2011; Vaughan & Garrison, 2005). Shea and Bidjerano (2009) also emphasize that epistemic engagement, where students are collaborative knowledge builders, is well articulated and extended through the CoI framework. We suggest that instructional designers and instructors can apply the CoI framework and approach to designing effective online learning environments for increased learning and satisfaction. However, one main consideration that should be taken into account is that all three presences are interrelated, and the establishment of one presence contributes to the establishment of the other presences (Shea & Bidjerano, 2009; Akyol & Garrison, 2008). Therefore, it is crucial that all the presences are considered in concert and in balance to support a collaborative community of inquiry.
REFERENCES Akyol, Z., & Garrison, D. R. (2008). The development of a community of inquiry over time in an online course: Understanding the progression and integration of social, cognitive and teaching presence. Journal of Asynchronous Learning Networks, 12(3), 3–22.
Akyol, Z., & Garrison, D. R. (2011). Understanding cognitive presence in an online and blended community of inquiry: Assessing outcomes and processes for deep approaches to learning. British Journal of Educational Technology, 42(2), 233–250. doi:10.1111/j.1467-8535.2009.01029.x Akyol, Z., Garrison, D. R., & Ozden, M. Y. (2009). Online and blended communities of inquiry: Exploring the developmental and perceptional differences. [IRRODL]. International Review of Research in Open and Distance Learning, 10(6), 65–83. Akyol, Z., Vaughan, N., & Garrison, D. R. (in press). The impact of course duration on the development of a community of inquiry. Interactive Learning Environments. Anderson, T. (2004). Teaching in an online learning context. In T. Anderson & F. Elloumi, (Eds.), Theory & practice of online learning (pp. 173–194). Retrieved January 10, 2010, from http:// cde.athabascau.ca/online_book/contents.html Arbaugh, J. B., Cleveland-Innes, M., Diaz, S., Garrison, D. R., Ice, P., & Richardson, J. (2008). Developing a community of inquiry instrument: Testing a measure of the Community of Inquiry framework using a multi-institutional sample. The Internet and Higher Education, 11, 133–136. doi:10.1016/j.iheduc.2008.06.003 Baker, J. D. (2004). An investigation of relationships among instructor immediacy and affective and cognitive learning in the online classroom. The Internet and Higher Education, 7(1), 1–13. doi:10.1016/j.iheduc.2003.11.006 Benbunan-Fich, R., & Arbaugh, J. B. (2006). Separating the effects of knowledge construction and group collaboration in learning outcomes of Web-based courses. Information & Management, 43(6), 778–793. doi:10.1016/j.im.2005.09.001
Boston, W., Diaz, S. R., Gibson, A. M., Ice, P., Richardson, J., & Swan, K. (2009). An exploration of the relationship between indicators of the community of inquiry framework and retention in online programs. Journal of Asynchronous Learning Networks, 13(3), 67–83. Brook, C., & Oliver, R. (2007). Exploring the influence of instructor actions on community development in online settings. In Lambropoulos, N., & Zaphiris, P. (Eds.), User-centered design of online learning communities. Hershey, PA: Idea Group. Driver, M. (2002). Exploring student perceptions of group interaction and class satisfaction in the Web-enhanced classroom. The Internet and Higher Education, 5(1), 35–45. doi:10.1016/ S1096-7516(01)00076-8 Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students’ perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4(2), 215–235. doi:10.1111/j.1540-4609.2006.00114.x Ertmer, P. A., & Stepich, D. A. (2004). Examining the relationship between higher-order learning and students’ perceived sense of community in an online learning environment. Proceedings of the 10th Australian World Wide Web conference, Gold Coast, Australia. Garrison, D. R. (2009). Communities of inquiry in online learning: Social, teaching and cognitive presence. In Howard, C. (Eds.), Encyclopedia of distance and online learning (2nd ed., pp. 352–355). Hershey, PA: IGI Global. Garrison, D. R., & Anderson, T. (2003). E-learning in the 21st century: A framework for research and practice. London, UK: Routledge/Falmer. doi:10.4324/9780203166093
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87–105. doi:10.1016/S1096-7516(00)00016-6 Garrison, D. R., & Arbaugh, J. B. (2007). Researching the community of inquiry framework: Review, issues, and future directions. The Internet and Higher Education, 10(3), 157–172. doi:10.1016/j.iheduc.2007.04.001 Garrison, D. R., & Cleveland-Innes, M. (2004). Critical factors in student satisfaction and success: Facilitating student role adjustment in online communities of inquiry. In J. Bourne & J. C. Moore (Eds), Elements of quality online education: Into the mainstream - volume 5 in the Sloan-C series (p. 29-38). Needham, MA: Sloan Center for Online Education. Garrison, D. R., Cleveland-Innes, M., & Fung, T. S. (2010). Exploring causal relations among teaching, cognitive and social presence: A holistic view of the community of inquiry framework. The Internet and Higher Education, 13(1-2), 31–36. doi:10.1016/j.iheduc.2009.10.002 Garrison, D. R., Cleveland-Innes, M., Koole, M., & Kappelman, J. (2006). Revisiting methodological issues in the analysis of transcripts: Negotiated coding and reliability. The Internet and Higher Education, 9(1), 1–8. doi:10.1016/j. iheduc.2005.11.001 Gunawardena, C. N., & Zittle, F. (1997). Social presence as a predictor of satisfaction within a computer mediated conferencing environment. American Journal of Distance Education, 11(3), 8–25. doi:10.1080/08923649709526970
Harvey, D., Moller, L. A., Huett, J. B., Godshalk, V. M., & Downs, M. (2007). Identifying factors that affect learning community development and performance in asynchronous distance education. In Luppicini, R. (Ed.), Online learning communities (pp. 169–187). Charlotte, NC: Information Age Publishing. Hong, K.-S. (2002). Relationships between students’ and instructional variables with satisfaction and learning from a Web-based course. The Internet and Higher Education, 5(3), 267–281. doi:10.1016/S1096-7516(02)00105-7 Jung, I., Choi, S., Lim, C., & Leem, J. (2002). Effects of different types of interaction on learning achievement, satisfaction and participation in Web-based instruction. Innovations in Education and Teaching International, 39(2), 153–162. doi:10.1080/14703290252934603 Kanuka, H., Rourke, L., & Laflamme, E. (2007). The influence of instructional methods on the quality of online discussion. British Journal of Educational Technology, 38(2), 260–271. doi:10.1111/j.1467-8535.2006.00620.x Lin, S. Y., & Overbaugh, R. C. (2007). The effect of student choice of online discussion format on tiered achievement and student satisfaction. Journal of Research on Technology in Education, 39(4), 399–415. Liu, X., Magjuka, R. J., Bonk, C. J., & Lee, S.-H. (2007). Does sense of community matter? An examination of participants’ perceptions of building learning communities in online courses. Quarterly Review of Distance Education, 8(1), 9–24.
Martz, B., Reddy, V. K., & Sangermano, K. (2004). Looking for indicators of success for distance education. In Howard, C., Schenk, K., & Discenza, R. (Eds.), Distance learning and university effectiveness: Changing educational paradigms for online learning (pp. 144–160). Hershey, PA: Information Science Publishing. doi:10.4018/9781591401780.ch007 McKlin, T., Harmon, S. W., Evans, W., & Jone, M. G. (2002). Cognitive presence in Web-based learning: A content analysis of students’ online discussions. American Journal of Distance Education, 15(1), 7–23. Meyer, K. (2003). Face-to-face versus threaded discussions: The role of time and higher-order thinking. Journal of Asynchronous Learning Networks, 7(3), 55–65. Mingming, J., & Evelyn, T. (1999). A study of students’ perceived learning in a Web-based online environment. In Proceedings of WebNet 99 World Conference on WWW and Internet, Honolulu, Hawaii. Paechter, M., Maier, B., & Macher, D. (2010). Students’ expectations of, and experiences in e-learning: Their relation to learning achievements and course satisfaction. Computers & Education, 54(1), 222–229. doi:10.1016/j. compedu.2009.08.005 Palloff, R. M., & Pratt, K. (2005). Collaborating online: Learning together in community. San Francisco, CA: Jossey-Bass. Pawan, F., Paulus, T. M., Yalcin, S., & Chang, C. F. (2003). Online learning: Patterns of engagement and interaction among in-service teachers. Language Learning & Technology, 7(3), 119–140. Picciano, A. G. (2002). Beyond student perceptions: Issues of interaction, presence, and performance in an online course. Journal of Asynchronous Learning Networks, 6(1), 21–40.
Richardson, J. C., & Swan, K. (2003). Examining social presence in online courses in relation to students’ perceived learning and satisfaction. Journal of Asynchronous Learning Networks, 7(1), 68–88. Rourke, L., & Anderson, T. (2002). Using peer teams to lead online discussion. Journal of Interactive Media in Education, 1. Rovai, A. P. (2002). Sense of community, perceived cognitive learning, and persistence in asynchronous learning networks. The Internet and Higher Education, 5(4), 319–332. doi:10.1016/ S1096-7516(02)00130-6 Sahin, I. (2007). Predicting student satisfaction in distance education and learning environments. Turkish Online Journal of Distance Education, 8(2), 113-119. Retrieved September 23, 2009 from http://tojde.anadolu.edu.tr/tojde26/pdf/ article_9.pdf Schreiner, L. A. (2009). Linking student satisfaction with retention. Retrieved January 19, 2010 from https://www.noellevitz.com/NR/rdonlyres/ A22786EF-65FF-4053-A15A-CBE145B0C708/ 0/LinkingStudentSatis0809.pdf Shea, P. (2006). A study of students’ sense of learning community in online environments. Journal of Asynchronous Learning Networks, 10(1), 35–44. Shea, P., & Bidjerano, T. (2009). Community of inquiry as a theoretical framework to foster epistemic engagement and cognitive presence in online education. Computers & Education, 52(3), 543–553. doi:10.1016/j.compedu.2008.10.007 Shea, P., Li, C. S., & Pickett, A. (2006). A study of teaching presence and student sense of learning community in fully online and Web-enhanced college courses. The Internet and Higher Education, 9(3), 175–190. doi:10.1016/j.iheduc.2006.06.005
Shea, P. J., Pickett, A. M., & Pelz, W. E. (2003). A follow-up investigation of teaching presence in the SUNY learning network. Journal of Asynchronous Learning Networks, 7(2), 61–80. Shea, P. J., Pickett, A. M., & Pelz, W. E. (2004). Enhancing student satisfaction through faculty development: The importance of teaching presence. In J. Bourne & J. C. Moore (Eds.), Elements of quality online education: Into the mainstream - volume 5 in the Sloan-C series (pp. 39-59). Needham, MA: Sloan Center for Online Education. So, H.-J., & Brush, T. A. (2008). Student perceptions of collaborative learning, social presence and satisfaction in a blended learning environment: Relationships and critical factors. Computers & Education, 51(1), 318–336. doi:10.1016/j.compedu.2007.05.009 Sun, P.-C., Tsai, R. J., Finger, G., Chen, Y.-Y., & Yeh, D. (2008). What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers & Education, 50(4), 1183–1202. doi:10.1016/j.compedu.2006.11.007 Swan, K., & Shih, L. F. (2005). On the nature and development of social presence in online course discussions. Journal of Asynchronous Learning Networks, 9(3), 115–136. Tu, C. H., & McIsaac, M. (2002). The relationship of social presence and interaction in online classes. American Journal of Distance Education, 16(3), 131–150. doi:10.1207/S15389286AJDE1603_2 Vaughan, N., & Garrison, D. R. (2005). Creating cognitive presence in a blended faculty development community. The Internet and Higher Education, 8(1), 1–12. doi:10.1016/j.iheduc.2004.11.001
KEY TERMS AND DEFINITIONS A Community of Inquiry: Where individual experiences and ideas are recognized and discussed in light of societal knowledge, norms, and values. Cognitive Presence: The extent to which learners are able to construct and confirm meaning through sustained reflection and discourse (Garrison, Anderson and Archer, 2001). Online Learning: A method of learning delivered by using asynchronous and synchronous communication technologies.
Perceived Learning: Self-evaluation of the amount of learning that students gained. Satisfaction: An affective outcome indicating positive feelings and attitudes towards the quality of learning and the learning environment. Social Presence: The ability of participants to identify with the community (e.g., course of study), communicate purposefully in a trusting environment, and develop interpersonal relationships by way of projecting their individual personalities. Teaching Presence: The design, facilitation, and direction of cognitive and social processes for the purpose of realizing personally meaningful and educationally worthwhile learning outcomes.
Section 2
Empirical Research Methods and Tutorial
Chapter 3
A Review of Research Methods in Online and Blended Business Education: 2000-2009
J. B. Arbaugh University of Wisconsin Oshkosh, USA Alvin Hwang Pace University, USA Birgit Leisen Pollack University of Wisconsin Oshkosh, USA
ABSTRACT This review of the online teaching and learning literature in business education found growing sophistication in analytical approaches over the last 10 years. The authors of this chapter believe researchers are uncovering important findings from the large number of predictors, control variables, and criterion variables examined. Scholars are employing appropriate and increasingly sophisticated techniques, such as structural equation models (used in 16 recent studies), within a field setting. To increase methodological rigor, researchers need to consciously incorporate control variables that are known to influence criterion variables of interest so as to clearly partial out the influence of their predictor variables of interest. This will help address shortcomings arising from the inability to convince sample respondents such as instructors, institutional administrators, and graduate business students of the benefits versus the costs of a fully randomized design approach. DOI: 10.4018/978-1-60960-615-2.ch003
INTRODUCTION As blended and online teaching and learning become increasingly ubiquitous in business education, the pace of research examining these phenomena has accelerated dramatically during the last decade (Arbaugh, Godfrey, Johnson, Leisen Pollack, Niendorf, & Wresch, 2009). However, for the findings of this research to appropriately inform the practice of online business education, the studies should be methodologically rigorous. Recent reviews and studies of online teaching and learning research suggest that the general methodological quality of this research stream varies widely and often is lacking (Bernard, Abrami, Lou, & Borokhovski, 2004; Bernard, Abrami, Lou, Borokhovski, Wade, Wozney, Wallet, Fiset, & Huang, 2004; Means, Toyama, Murphy, Bakia, & Jones, 2009). Concerns about methodological quality led Bernard et al. (2009) to include a quality check in their recent meta-analysis of comparative studies of differing types of distributed education. Such reviews have under-reported studies of business education, in part because business education scholars often have not used experimental research designs with random assignment of subjects in designing their studies. However, because online business education scholars typically examine actual online courses where administrators and students control the composition of the research samples rather than the researchers, such designs often are not feasible. As the flexibility of the delivery medium is one of the prime attractions of online business courses for part-time MBA students whose priority is continuing in their jobs while getting their education (Arbaugh, 2005b; Dacko, 2001; Millson & Wilemon, 2008), it is unlikely that any institution would willingly randomly assign a student to an online or classroom-based section for the convenience of researchers. Thus, researchers who typically do not have the option of subject randomization but who are able to identify important background influences that could affect their subjects may consciously
include such influences as covariates along with their independent variables of interest within their field research design. The use of such covariates is the only real practical design alternative in the field, as most university administrators would reject a fully randomized design, especially administrators from the institutions where the majority of online instruction in business education is taught: masters comprehensive-level institutions (Alexander, Perrault, Zhao, & Waldman, 2009; Popovich & Neel, 2005), whose primary focus is on meeting education needs rather than research priorities. This field characteristic requires business education researchers to incorporate research designs that provide the advantages of randomized experiments as much as possible but without compromising student access or program/course offerings.
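To make the covariate strategy described above concrete, the sketch below illustrates one common way to partial out background influences in a field setting: an ordinary least squares regression that includes the predictor of interest alongside control variables. It is illustrative only; the file name and column names (delivery_medium, gpa, gender, prior_online_courses, satisfaction) are hypothetical and are not drawn from any study reviewed in this chapter, and pandas and statsmodels are assumed to be installed.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per student, with a satisfaction score,
# the delivery medium of the section, and common background covariates.
df = pd.read_csv("course_survey.csv")

# The coefficient on delivery_medium is estimated while holding the
# covariates constant, i.e., their influence is partialled out.
model = smf.ols(
    "satisfaction ~ C(delivery_medium) + gpa + C(gender) + prior_online_courses",
    data=df,
).fit()
print(model.summary())
```

The same general logic extends to ANCOVA-style comparisons of online versus classroom-based sections when random assignment is not possible.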
MAIN FOCUS OF THE CHAPTER The purpose of this chapter is to examine and assess the state of research methods used in studies of online and blended learning in the business disciplines with the intent of assessing the field, recommending adjustments to current research approaches and identifying opportunities for meaningful future research. We review research from the business disciplines of Accounting, Economics, Finance, Information Systems (IS), Management, Marketing, and Operations/Supply Chain Management over the first decade of the 21st century. It is our hope that the review will help those interested in evidence-based course design and delivery identify exemplary studies and help future scholars raise the overall quality of this research in the business disciplines. We also hope that online teaching and learning scholars in other disciplines might use our approach to conduct methods reviews in their respective fields, thereby helping to inform the broader research community of appropriate research design and conduct.
Table 1. Terms used in the literature search
Disciplines: Management; Finance; Accounting; Marketing; Information Systems; Operations/Supply Chain Management; Economics
Search Terms for On-line: Blended; Mediated; Technology-mediated; Distance; On-line; Virtual; Web-based; E; Cyberspace; Computer Based; Computer Assisted; Distributed
Search Terms for Learning: Education; Learning; Teaching; Instruction
RESEARCH METHODS REVIEW PROTOCOL This paper is developed from a subset of a broader literature review of articles in business education that examined virtual learning environments including both fully and partially online course content and participant interaction. A comprehensive search for peer-reviewed articles pertaining to “on-line learning” in business courses that were published after January 1, 2000, was conducted between September 2006 and November 2009. Databases examined in the review included ABI/ Inform, Business Full Text, Business Source Elite, and Lexis/Nexis Business. Terminology used in the search is provided in Table 1. To supplement this review, the primary learning and education journals for each business discipline dating back to 2000, as identified in the journals database recently published in Academy of Management Learning & Education, were included in the review (Whetten, 2008). Given our interests in possible alternatives to exclusive dependence upon randomized experiments, we cast a wide net to include studies that examined virtual learning environments where the course content and participant interaction is conducted at least partially online. This protocol identified 120 articles that empirically examined online and/or blended learning in business and management education.
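As a rough illustration of how the terminology in Table 1 translates into search strings, the following sketch enumerates discipline-by-delivery-by-learning term combinations. It is a simplified reconstruction for illustration only: the term lists are abbreviated, and the query syntax is generic rather than specific to any of the databases named above.

```python
from itertools import product

# Abbreviated term lists modeled on Table 1 (the full lists appear in the table above).
disciplines = ["management", "finance", "accounting", "marketing", "economics"]
delivery_terms = ["online", "blended", "distance", "web-based", "virtual"]
learning_terms = ["education", "learning", "teaching", "instruction"]

# One candidate search string per combination, e.g. 'marketing AND "online learning"'.
queries = [f'{d} AND "{o} {l}"' for d, o, l in product(disciplines, delivery_terms, learning_terms)]
print(len(queries), "candidate search strings; first:", queries[0])
```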
OBSERVATIONS ON RESEARCH METHODS AND ANALYTICAL APPROACHES Research Designs Used Two design types dominate our research sample: survey-based studies and quasi-experimental comparative designs. Sixty-four of the studies used a survey-based data collection approach. Forty-three studies used quasi-experimental designs, with nearly all of these studies comprising comparisons of online class sections or students with classroom-based sections or students. Twenty-eight of the quasi-experimental studies used surveys to collect at least part of their data. Pure experimental designs were used in four studies, and interviews, archival studies, and multiple methods were the approaches used in three studies each.
Sample Sizes and Response Rates Table 2 shows the number of studies in various sample size categories for each year of the review. As the table shows, the number of studies published annually increased sharply in 2002, and that increased activity continued throughout the decade. Many of the studies have at least moderately large sample sizes, with 74 studies having samples larger than 100. These larger samples also began to appear in earnest in 2002.
Table 2. Number of articles with sample sizes by year of publication
Sample size categories: 1-50, 51-100, 101-200, 201-500, 501-1000, 1000+
Totals by sample size category: 1-50: 19; 51-100: 21; 101-200: 25; 201-500: 28; 501-1000: 17; 1000+: 4
Totals by year: 2000: 5; 2001: 2; 2002: 15; 2003: 9; 2004: 16; 2005: 15; 2006: 17; 2007: 11; 2008: 13; 2009: 11
This suggests that securing appropriate statistical power became less of a concern as the decade progressed. Although the number of studies and the size of their samples increased throughout the decade, surprisingly there was a small but persistent stream of published studies with samples of fewer than 50. It should be noted, however, that some of these more recent studies are of faculty rather than students (e.g., Connally, Jones, & Jones, 2007; Liu, Bonk, Magjuka, Lee, & Su, 2005; Yukselturk & Top, 2005-2006). However, only seven studies mentioned attempts to control for non-response bias. Although we realize that many of the studies used entire class sections and therefore did not address non-response issues, the preponderance of survey-based studies in our sample suggests that non-response bias could be a concern.
Types of Variables Examined: Criterion, Predictor, and Control Variables The articles' criterion variables were classified into five broad categories: (1) course performance variables (e.g., exam score, GPA), (2) psychological variables (e.g., attitudes, perceptions, satisfaction), (3) course delivery variables (e.g., online, on-site), (4) student skills (e.g., computing skills), and (5) other criterion variables.
The complete listing of the categories and measures is presented in Table 3. A prominently researched category of criterion variables pertains to course performance. These include course grades and exam scores. Here, much research has accumulated comparing traditional versus online course formats on one or more course performance measures. Echoing a conclusion common among other researchers, Anstine and Skidmore (2005) found no statistically significant difference in learning outcomes between the two formats in their direct comparison of exam scores for online versus traditional face-to-face course formats in an MBA-level statistics course. Gratton-Lavoie and Stanley (2009) noted that students enrolled in an online version of an introductory economics course performed significantly higher on a final exam when compared to students enrolled in a hybrid course. No significant performance differences were found on two mid-term exams. The largest category of criterion variables comprises measures of psychological constructs. For example, Klein, Noe, and Wang (2006) found students' motivation to learn was significantly higher for hybrid than for face-to-face courses. Another psychological construct that has
Table 3. Categories of criterion variables and measures (items in each category are listed in alphabetical order)
Course Performance Variables: academic performance; assignment/exam/test/quiz scores; actual learning; content knowledge; course completion; course evaluation questions; course points; disappearance from class; drop rate; gain score; entrance exam; exam questions; exercises; final score; learning outcomes; overall learning; participation grade; pass rate; postings; skill demonstration; skill development; student learning; student withdrawal; team performance. Sample studies: Anstine & Skidmore (2005); Gratton-Lavoie & Stanley (2009).
Psychological Variables: attitude toward course; attitude toward discussion design format; attitude toward instructor; attitude toward learner-learner interaction; attitude toward interaction; attitude towards medium; attitude towards PowerPoint/Audio/Video; attitude toward system; attitude toward technology; attitude toward virtual teams; cognitive effort; cognitive learning; excitement; intention to use; learning style; motivation; satisfaction; satisfaction with course; satisfaction with delivery medium; perceived ease of use; perceived flexibility; perceived interaction; perceived learning; perceived quality of learning experience; perceived usefulness; perception of Blackboard; perception of digital library; perception of distance learning; perception of quality of online learning; perceptions of skills (cognitive, affective, interactive); value of team processes; 6 DVs & composite of 4 DVs. Sample studies: Arbaugh & Rau (2007); Benbunan-Fich & Hiltz (2003); Klein, Noe, & Wang (2006); Martins & Kellermanns (2004); Webb, Gill, & Poe (2005).
Course Delivery Variables: contact; course instrumentality; course structure; delivery medium; effective learning support; group behavior; group cohesiveness; group interactivity; instructor access; instructor performance; instructor support; interaction effectiveness; interaction outside class; interaction process; interaction with instructor; presentation of course material; sense of community; social interaction; social presence; student-to-student interaction; system usage; usage behavior; usage of BlackBoard; use of communication. Sample studies: Berry (2002); Drago & Peltier (2004); Larson (2002).
Student Skill Variables: analytical skills; computing skills; leadership skills; management topics/skills; playfulness with computers.
Other Variables: assignment integrative complexity; ambiguity; clarity; effectiveness; future use of technology; medium as barrier/enabler; programming; study time efficiency; theory; usefulness of experiential learning; program effectiveness.
Sample studies: Al-Shammari (2005); Chen & Jones (2007).
Table 4. Categories of predictor variables and measures (items in each category are listed in alphabetical order)
Course Performance Variables: academic achievement; academic level; course grade; GPA; online assessment; prior knowledge; prior learning; prior performance in class work; self-tests. Sample studies: Hwang & Arbaugh (2009).
Psychological Variables: attitude toward computing, IT; attitude toward online instruction relative to classroom instruction; attitude toward subject; cognitive absorption; cognitive presence; cognitive style; focused immersion; heightened enjoyment; individualism/collectivism; intention to use; involvement; learning approaches; learning goal orientation; learning style; motivation; perceived ease of use; perceived incentive to use; perceived interaction; perceived skill development; perceived knowledge; perceived usefulness of the degree; perceived usefulness of online environment; perceived technology effectiveness; perception of effectiveness of medium; perception of own learning style; personality; satisfaction with experience; self-efficacy; self-motivation; social pressure; temporal dissociation; Wonderlic personality. Sample studies: Martins & Kellermanns (2004).
Course Delivery Variables: audio summaries; availability of lecture notes; awareness of system capabilities; BlackBoard features; bulletin boards; chat summaries; classroom demeanor; classroom dynamics; class section size; coach involvement; consistency of participation; content; course activity mode; course design characteristics; course flexibility; course format; content items; course participation; course site usage; course structure; course topic areas; delivery medium; design of environment for interaction; direct instruction; ease of use; ease of interaction; emphasis on interaction; facilitation effectiveness; facilitating discourse; faculty encouragement to use; flexibility; functionality; group activity; group cohesiveness; group projects; group size; hit consistency; in-class feedback seeking; individual projects; instructional design & organization; instructor; instructor facilitation; instructor feedback; instructor feedback timeliness; instructor feedback quality; instructor immediacy; instructor online experience; instructor presence; instructor role; intensity of participation; interaction; interaction difficulty; interface; learner-learner interaction; learner-instructor interaction; learner-system interaction; media variety; name recognition; onsite meeting; out-of-class feedback seeking; participant behavior; participation in discussion boards; peer feedback; peer encouragement; positive & negative feedback behavior; program flexibility; prior online course experience; provided flexibility; provided helpfulness; quality of learning technology; quality of materials; site hits; social presence; social prompting; software platform; student engagement; support features; system usage; usefulness of lecture notes; use of CMS content; use of computer-assisted; virtual team experiences; student dyads; teaching approach; teaching presence; team composition; team conference participation; teamwork orientation; technology tools; telepresence; time spent on chat; threaded discussion; total hits on site. Sample studies: Eom, Wen, & Ashill (2006); Hansen (2008); Peltier, Schibrowsky, & Drago (2007); Webb, Gill, & Poe (2005).
Student Skill Variables: computer literacy; experience level; preference for verbal versus written; student ability; students' technical skills. Sample studies: Gratton-Lavoie & Stanley (2009); Medlin, Vannoy, & Dave (2004).
Demographics & Other Variables: academic discipline; age; children at home; commute time to university; convenience; empathy; full-time versus part-time student; gender; importance of program features; industry employed in; level of effort; location; marital status; nationality, international student; parents' education; semester; reasons for choosing online; reasons for pursuing the degree; times participated in study; weekly work hours. Sample studies: Gratton-Lavoie & Stanley (2009); Grzeda & Miller (2009).
been studied is student satisfaction. Arbaugh and Rau (2007) investigated the effects of a variety of course structure and participant behavior variables on student satisfaction with the delivery medium. They concluded that structure variables such as media variety and group projects significantly and positively affected delivery medium satisfaction. They also found that learner-learner interaction was negatively related to satisfaction. A third category of psychological variables involves students' perceptions of course elements. For instance, perceived learning was investigated by Benbunan-Fich and Hiltz (2003). They found no significant differences in perceived learning between online, face-to-face, and hybrid course formats. Webb, Gill and Poe (2005) found that perceptions of interaction vary significantly by delivery medium, with a perceived increase in interaction for courses with more online discussion components. Martins and Kellermanns (2004) found that students perceive a web-based course management system to be more useful if they were presented with incentives to use the system and if they received faculty and peer encouragement to use the system. Also, students perceived such a system to be easier to use if they were aware of the capabilities of the system, had technical support, and had prior experience with computers and the Web. Course delivery variables represented the third category of criterion variables. These
included measures of course structure, delivery medium, and a host of measures that assessed components of interaction processes. Larson (2002) found that instructor involvement had a positive effect on interactivity in an online strategic marketing course. He also found that interaction quantity was negatively related to interaction quality. Berry (2002) found that face-to-face versus virtual team interactions were equally effective in producing group cohesiveness, satisfactory group interaction process, and satisfactory group outcomes. Student skill variables represented the fourth main category of criterion variables. For example, along with perceptions of course effectiveness, perceived learning, and course satisfaction, Chen and Jones (2007) also tested for differences in reported analytical and computer skills between a classroom-based and a blended learning section of an MBA-level Accounting course. They found that students in the blended learning section reported greater improvement in analytical and computer skills. Predictor variables were classified into five broad categories: (1) course performance variables (e.g., GPA), (2) psychological variables (e.g., attitudes, perceptions, satisfaction), (3) course delivery variables (e.g., online, on-site), (4) student skills (e.g., computing skills), and (5) demographic and other variables. The complete listing of the categories and measures is presented in Table 4.
Course performance variables were largely previous academic performance measures of respondents, such as prior knowledge, intelligence, and/or GPA (Cheung & Kan, 2002; Kim & Schniederjans, 2004; Murphy & Tyler, 2005; Potter & Johnston, 2006; Weber & Lennon, 2007). The psychological variables category is quite varied, including student attitudes and numerous measures of student perceptions. These included perception measures of educational technology (Arbaugh, 2005b; Davis & Wong, 2007; Johnson, Hornik, & Salas, 2008; Landry, Griffeth & Hartman, 2006), learner motivation (Eom, Wen, & Ashill, 2006; Klein et al., 2006; Saade, 2007), and participant interaction, amongst others (Arbaugh & Benbunan-Fich, 2006, 2007; Peltier, Schibrowsky, & Drago, 2007). These attitudes and perceptions do influence criterion variables. For example, Martins and Kellermanns (2004) found students' attitudes toward the web-based system to positively affect their intention to use the system. Another important predictor category is course delivery. In fact, course delivery characteristics are the most commonly studied predictor variables in our review sample. Here, a significant amount of research has accumulated, with many studies comparing traditional versus online course formats on a variety of factors. Most prominently, the two formats have been evaluated for differences in student performance. For example, Webb, Gill and Poe (2005) found that students performed better on multiple learning outcomes in online/partially online course formats. Hansen (2008) found that the online class format led to better performance on a business plan assignment when compared to the traditional classroom format. However, he found no significant differences in test scores for the two formats. A less common approach is to compare different technologies for delivering education online. For example, Alavi, Marakas, and Yoo (2002) found that cohorts of adult learners reported higher learning gains using an e-mail/listserv-based system than those using a test version of a "next generation" integrated course management system. Peltier, Schibrowsky,
and Drago (2007) found student perceptions of the quality of their online experience were significantly influenced by the course structure and course content. Course content perceptions were in turn significantly influenced by various interaction variables and lecture delivery quality. In their investigation of factors affecting satisfaction with an online course, Eom, Wen, and Ashill (2006) found course structure, interaction, instructor feedback, and instructor facilitation to be important influencing course delivery variables. Demographic characteristics of students also have been studied as predictor variables. For example, Gratton-Lavoie and Stanley (2009) noted significant differences in the age, gender, marital status, and number of children of respondents who enrolled in online versus hybrid classroom formats of an introductory economics course, with the online course format being selected by significantly more older students, students with children, and female students. It is likely that the most significant finding from this review regarding control variables is their relative lack of use. At least half of the studies reviewed included no control variables whatsoever. The most commonly used control variables were type of student and gender (31 studies each), intelligence and/or prior academic performance (25 studies), prior experience with online learning or technology (18 studies), course design characteristics (17 studies), system usage (8 studies), and work position/experience (8 studies). Although some studies had rigorous controls for a variety of conditions (Anstine & Skidmore, 2005; Arbaugh, 2005a, 2005b; Arbaugh & Benbunan-Fich, 2007; Klein et al., 2006; and Webb et al., 2005 are examples of such studies), with some even using important demographic and other variables as primary predictors of interest, the collective body of work suggests that many of the findings have not ruled out influences from common control variables, leaving alternative explanations unaccounted for.
Table 5. Number of articles with statistical techniques by year of publication (2000-2009)
Technique categories: descriptive statistics only; chi-square, t-tests, ANOVA; multivariate techniques (MANOVA, regression, EFA, etc.); SEM, CFA; HLM
Totals across the decade: descriptive statistics only: 29; chi-square, t-tests, ANOVA: 24; multivariate techniques: 46; SEM, CFA: 16
Statistical Techniques Table 5 shows a breakout of the articles by statistical techniques used. Several interesting observations emerge from this table. First, the use of multivariate techniques such as multiple regression analysis has been a staple of the research stream. The use of this highly reliable testing technique early in the research area may be a little surprising to some. But in hindsight, this may reflect the fact that the field has attracted researchers who have been trained in such analytical techniques, which are common in many business disciplines, and those scholars simply brought these techniques with them to design and analyze studies of online teaching and learning. Nevertheless, despite the sophistication of some researchers in this area, nearly one-fourth of the published studies reported only descriptive statistics, and this number did not decline over the decade. Even though some research questions are best addressed by descriptive statistics, there is room to debate the pervasiveness of analytical rigor across the field when there is still a sizeable proportion of studies that only employed basic descriptive statistics. On a positive note, there is a growing
number of studies, although still small, that have used highly sophisticated statistical techniques such as structural equation models and hierarchical linear models in recent years.
SUMMARY ASSESSMENT OF THE METHODOLOGICAL STATE OF THE FIELD Overall, it appears that there is a robust variety of characteristics being examined in studies of online business education, and the methodological and analytical rigor with which the studies are being conducted continues to improve. Sample sizes are increasing, and although there may still be a larger than expected proportion of studies that exclusively relied on basic descriptive statistics, more analytically rigorous techniques have increasing presence. What is particularly encouraging from the review is that it is evident that the peer review process is allowing the cream to rise to the top. The particularly methodologically and analytically rigorous studies are being found in journals such as American Economic Review (Brown & Liedholm, 2002), Information Systems Research
(Alavi et al., 2002; Santhanam, Sasidharan, & Webster, 2008), Journal of Economic Education (Anstine & Skidmore, 2005; Gratton-Lavoie & Stanley, 2009), Personnel Psychology (Klein et al., 2006), Information & Management (Benbunan-Fich & Arbaugh, 2006; Lu, Yu, & Liu, 2003; Saade & Bahli, 2005) and Academy of Management Learning & Education (Arbaugh, 2005a, 2005b; Arbaugh & Benbunan-Fich, 2006; Martins & Kellermanns, 2004). In short, excellent work in this area is published in highly regarded outlets.
FUTURE RESEARCH DIRECTIONS Our review shows a field that is increasing in sophistication. This is seen in the sampling breadth, research designs, and analytical approaches that were described here. For example, out of the 120 studies, sample sizes have ranged from a low of 27 students in 2000 to a high of 1,244 students in 2004 (Drago & Peltier, 2004), with a sample mean of 261 students. Sample sizes in the hundreds have become increasingly common since 2005. The samples were fairly well spread across disciplines, with 28 studies indicating multi-disciplinary samples and the rest drawn from accounting, economics, finance, information systems, management, marketing, and other business majors. Thus, in addition to growing sample sizes, respondents have come from all the major business disciplines. This helps to increase the generalizability of findings in the field. Also, the range of predictors has included a fairly wide set of demographic, attitudinal, and other respondent characteristics, and recent criterion variables have focused on student learning performance such as tests and exam results (Hwang & Arbaugh, 2009; Johnson et al., 2008). Despite the growing depth in studies over the last 10 years, the field could benefit from further methodological and analytical rigor, as described below.
More Faculty-Based Samples First, research on how faculty and administrators may facilitate online versus face-to-face learning performance is still lacking. We could find only seven recent studies in the online teaching and learning area that included responses from faculty, with only two faculty studies having samples of over 100 (Connolly et al., 2007; Gibson, Harris, & Colaric, 2008; Liu et al., 2005; Popovich & Neel, 2005). These studies with faculty responses have focused on faculty acceptance of online technology and their perceptions (Gibson et al., 2008) rather than their roles in directly facilitating the learning performance of students. Future studies could explore and test the role of faculty and administrators in facilitating online learning success.
Greater Use of HLM Techniques (Or At Least Test to See Whether Their Use Is Appropriate) In terms of research design, the most common approach was cross-sectional surveys of student perceptions and attitudes – either alone or together with some student records, such as class grades and/or online participation, for analyses. Few studies used pre-post survey designs. Only 22 studies relied solely on student grades or exam scores plus some form of student records (e.g., types of classes, previous grades, etc.) without collecting additional student survey data. The most common statistical analyses used in the studies were either multiple regression procedures to examine relationships between predictors and criterion variables of interest and/or analysis of variance procedures to compare differences in mean values among sub-samples. Seventy of the 120 studies used either multiple regression or analysis of variance procedures. Sixteen studies used structural equation models in multi-stage model testing or to determine the significance of factor structures. Some studies used exploratory factor analysis procedures
to develop factors from survey items. Also, some were descriptive papers that required no statistical procedures, with a few studies presenting frequency distributions or correlation matrices without further statistical analyses. Clearly, the use of some form of multiple regression or analysis of variance procedures is common across the reviewed studies. Less common is the use of structural equation models, where factors could be tested and multi-stage models built from the data to examine fit. An analytical approach that is clearly missing from these studies is hierarchical linear modeling (HLM). While such an approach is increasingly common in discipline-based business research (Hitt, Beamish, Jackson, & Mathieu, 2007; Morgeson & Hofmann, 1999), it has yet to be used in online business education research. Given the number of multi-course studies in our review (45 had five or more class sections), there is the potential for unexplained nesting effects. Students in such studies are nested within courses, and there may be course-specific effects that are not accounted for when analyses are limited to traditional multivariate approaches. It would be understandable if HLM techniques were not being used because intra-class correlation coefficients (ICCs) were too low to warrant their use. However, we found only one study that even checked the ICCs of its classes (Alavi et al., 2002). Therefore, we encourage future researchers to examine possible nesting effects and, at a minimum, to calculate the ICC in their multi-class section studies (Bickel, 2007). This will help researchers determine whether the use of HLM techniques is warranted.
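As a rough illustration of that minimum check, the sketch below estimates ICC(1) for an outcome nested within class sections from the mean squares of a one-way ANOVA. The data, column names, and section effect are hypothetical; this is a minimal sketch, not a substitute for the multilevel diagnostics discussed in Bickel (2007).

```python
import numpy as np
import pandas as pd

def icc1(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Estimate ICC(1) from one-way ANOVA mean squares for outcomes nested in groups."""
    groups = df.groupby(group_col)[outcome_col]
    k = groups.ngroups                      # number of class sections
    n_bar = len(df) / k                     # average section size
    grand_mean = df[outcome_col].mean()

    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for _, g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for _, g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (len(df) - k)
    return (ms_between - ms_within) / (ms_between + (n_bar - 1) * ms_within)

# Hypothetical data: 12 class sections of 25 students each, with a small section effect
rng = np.random.default_rng(0)
sections = np.repeat(np.arange(12), 25)
scores = 75 + rng.normal(0, 3, 12)[sections] + rng.normal(0, 10, sections.size)
data = pd.DataFrame({"section_id": sections, "exam_score": scores})

print(f"ICC(1) = {icc1(data, 'section_id', 'exam_score'):.3f}")
```

A near-zero ICC would suggest that section-level nesting explains little outcome variance and that conventional single-level models may suffice; a non-trivial ICC points toward HLM.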
Further Identification and Use of Relevant Control Variables In addition to possible nesting effects, Anstine and Skidmore (2005) raised an important concern – the need to include control variables that account for background effects (e.g., demographic factors, class design factors, etc.) which may not be of primary interest to a
study but nevertheless could impact learning outcomes. Sixty-two studies did not report any statistical control variables. Researchers should bear this caution in mind when designing their studies. The most common control variables were of a demographic nature, such as age and gender. Class structure variables such as prior instructor online course experience, class section size, types of assignments, and use of media were also used as control variables in some studies. We expect future studies to pay increasing attention to the need for control variables, as this will improve the comparability of findings in the field. Besides variables measuring student age, gender, and prior knowledge, some researchers believe controls for prior learner online course experience, effort expended, and the likelihood that learners would use online courses as a substitute for classroom learning are particularly warranted (Anstine & Skidmore, 2005; Arbaugh, 2005a; Brown & Liedholm, 2002). These may prove to be useful control variables as researchers move toward predictor variables that are more abstract in nature but which could help explain student effort and motivation in online learning success. For example, recent studies have examined the role of competitive attitudes (Hwang & Arbaugh, 2009) and cultural variables such as individualism-collectivism (Johnson et al., 2008) in motivating online learning participation and success. By controlling for factors such as prior online experience and individual preference for the online environment, researchers could uncover additional influences of more abstract factors on online learning success.
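To make the idea concrete, the following sketch regresses a learning outcome on a hypothetical attitudinal predictor of interest while holding common demographic and experience controls constant. The variable names and simulated data are illustrative only and are not drawn from any of the reviewed studies.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level data; in practice this would come from course and survey records
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "competitive_attitude": rng.normal(4, 1, n),     # predictor of interest
    "age": rng.integers(20, 45, n),                   # control
    "gender": rng.choice(["F", "M"], n),              # control
    "prior_online_courses": rng.integers(0, 6, n),    # control
})
df["exam_score"] = (60 + 2.0 * df["competitive_attitude"]
                    + 1.5 * df["prior_online_courses"] + rng.normal(0, 8, n))

# Estimate the predictor's effect while holding the background controls constant
model = smf.ols(
    "exam_score ~ competitive_attitude + age + C(gender) + prior_online_courses",
    data=df,
).fit()
print(model.params)
```

Comparing the model with and without the controls shows how much of the predictor's apparent effect is attributable to background characteristics rather than to the variable of interest itself.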
Greater Consideration of Randomized Design, or Reasonable Alternatives Another research design consideration is the lack of traditional randomized experimental designs in current studies. About one-third of the studies were comparison studies of classroom,
online, and/or blended learning settings. However, all of these studies were quasi-experimental in design because students were not randomly assigned to the delivery medium in their classes. This is an area of concern that has been raised by some educational researchers (Bernard et al., 2004; Means et al., 2009). Although a fully randomized experimental design is believed to be ideal for comparing the learning efficacy of traditional face-to-face classes versus online classes, there are practical issues that researchers have to consider, as well as possible work-arounds that address some of the concerns of not having fully randomized experimental designs in online learning research. On the practical side, it is no secret that the flexibility of the online delivery medium is a prime attraction to students who participate in online business courses, particularly at the MBA level (Arbaugh, 2005b; Dacko, 2001; Millson & Wilemon, 2008). Part-time MBA programs focus on full-time working professionals who usually have clear preferences about the timing of classes and the delivery medium that suits their working needs, which makes it unlikely that these students would volunteer to be randomly assigned to a face-to-face versus an online learning environment at the request of researchers. Also, course administrators who treat student preferences and needs as primary considerations are unlikely to allow researchers a free hand in randomized field designs. What other options do researchers have in addressing the argument for randomized experimental design in an online learning environment? We have to go back to the nature and assumptions of randomized experimental design for some insight. The strength and popularity of randomized experimental design rest on the foundation of internal validity – changes in outcomes can be traced to treatment effects while other possible background influences are taken into account through the subject randomization process. Although experimental researchers could
control their subjects in laboratories for randomization and treatment assignment purposes, field researchers often do not have this luxury, for the reasons mentioned above. The way to address potential background effects that may confound treatment effects is to identify them in the initial design and consciously capture them as covariates alongside the predictor variables of interest, which further underscores our call for more intentional incorporation of control variables into studies in this field. By consciously designing and capturing both types of variables, researchers can examine the comparative impact of predictors and covariates and so account for background effects. For example, in examining the impact of online learning usage on learning outcomes, researchers have often included covariates such as prior experience with technology, participant demographics (gender, age, intelligence, etc.), and other variables that were important but not the primary interest of the studies (Arbaugh, 2005a; Anstine & Skidmore, 2005; Gratton-Lavoie & Stanley, 2009; Liu, Magjuka, & Lee, 2006). Although no design can fully account for all background effects, the use of covariates should account for the major known ones. Experimental designs and field study designs play different roles in uncovering knowledge across many disciplines. Even experimental researchers have acknowledged these roles (Mook, 1983), and certainly we should not ignore the usefulness of covariates where complete randomization of subjects is not possible. If researchers restrict themselves only to the randomized experimental design approach, then the process of uncovering the impact of face-to-face versus online learning is likely to be much slower, as researchers will have to depend on the good fortune of having persuaded students to participate in randomization studies. Instead of doing a service to the discovery of how different learning environments may impact students, the
sole use of a fully randomized experimental approach would, on the contrary, greatly diminish the number of studies, the sample sizes, and the consequent generalizability of findings in this area (Arbaugh & Rau, 2007).
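The covariate-based work-around described above can be illustrated with a short sketch: a quasi-experimental comparison of delivery modes in which the mode indicator is estimated alongside background covariates. The variable names and simulated data are hypothetical stand-ins for the kinds of covariates cited above (prior technology experience, demographics).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical quasi-experimental data: students self-select into delivery modes
rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "mode": rng.choice(["face_to_face", "online"], n),    # not randomly assigned
    "prior_tech_experience": rng.normal(3, 1, n),          # covariate
    "age": rng.integers(21, 50, n),                         # covariate
    "gender": rng.choice(["F", "M"], n),                    # covariate
})
df["final_grade"] = (70 + 3.0 * df["prior_tech_experience"]
                     + 2.0 * (df["mode"] == "online") + rng.normal(0, 7, n))

# Delivery-mode effect estimated with background covariates captured in the same model
fit = smf.ols("final_grade ~ C(mode) + prior_tech_experience + age + C(gender)", data=df).fit()
print(fit.params)
```

This does not recover the internal validity of true randomization, but it makes the major known background effects explicit in the analysis rather than leaving them unmodeled.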
Greater Consideration of Varieties of Blended Learning Models Another issue for consideration is the growing pervasiveness of online and blended learning delivery media (Allen, Seaman, & Garrett, 2007; Clark & Mayer, 2008). While experimental design studies may help identify differences in impact between traditional face-to-face and online learning environments, the growing acceptance of blended learning structures makes it difficult to clearly trace face-to-face versus online learning effects with an experimental design approach. This is because students no longer participate solely in either face-to-face or online learning environments; they are involved in both types of environments in the same course. Instead of thinking in terms of “either/or” student assignment to the face-to-face versus online learning environment, such blended environments point to the usefulness of considering influences from both types of learning environments. This analysis will be achieved not through randomized experimental design but through the conscious capture of both types of influences and testing of their comparative impact on learning outcomes. An example is the Hwang and Arbaugh (2009) study, in which reported face-to-face feedback interactions and discussion board participation frequency were compared for their relative impact on test results. Thus, the question of influence from either the face-to-face or the online learning environment may be superseded by the question of the comparative impact of both environments, due to the growth of the blended learning medium.
CONCLUSION This review has shown a deepening of research in the online teaching and learning area, with numerous studies testing the impact of online learning approaches and student attitudes on learning success. While we do know that some online learning approaches have contributed to learning performance, there is still more to be done. First, we need to more consistently include control variables to account for common background influences, such as demographic variables, that could impact online learning performance. This is especially important when the only research design option in some environments is a quasi-experimental approach, if an experiment is possible at all. Including control variables will help alleviate concerns about background effects that would best be accounted for in a true experimental design. Also, researchers should start looking at nesting effects of classes, or of whole courses across different sections of student responses – the arena of hierarchical linear models. Researchers should also continue their efforts to develop more sophisticated multi-stage models that can test the comparative influences of many variables within a sample through structural equation models. This will move researchers away from traditional two-stage multiple regression models. Although multiple regression models are still important, they could be subsumed in models that involve more than two stages. Despite the still somewhat limited number of empirical studies in this area, there is a beginning set of results that could be used to explore best practices in designing online learning environments. These may include the types of students that could benefit from such environments, the role of attitudes in motivating student success, and the importance of regular participation in order to succeed in such learning environments. There is more to be done, but we are beginning to understand how online teaching may be designed for student learning success.
REFERENCES Al-Shammari, M. (2005). Assessing the learning experience in a business process reengineering (BPR) course at the University of Bahrain. Business Process Management Journal, 11(1), 47–62. doi:10.1108/14637150510578728 Alavi, M., Marakas, G. M., & Yoo, Y. (2002). A comparative study of distributed learning environments on learning outcomes. Information Systems Research, 13, 404–415. doi:10.1287/ isre.13.4.404.72 Alexander, M. W., Perrault, H., Zhao, J. J., & Waldman, L. (2009). Comparing AACSB faculty and student online learning experiences: Changes between 2000 and 2006. Journal of Educators Online, 6(1). Retrieved February 1, 2009, from http://www.thejeo.com/Archives/ Volume6Number1/Alexanderetalpaper.pdf Allen, I. E., Seaman, J., & Garrett, R. (2007). Blending in: The extent and promise of blended earning in the United States. Needham, MA: Sloan-C. Anstine, J., & Skidmore, M. (2005). A small sample study of traditional and online courses with sample selection adjustment. The Journal of Economic Education, 36, 107–127. Arbaugh, J. B. (2005a). How much does subject matter matter? A study of disciplinary effects in Web-based MBA courses. Academy of Management Learning & Education, 4, 57–73. Arbaugh, J. B. (2005b). Is there an optimal design for on-line MBA courses? Academy of Management Learning & Education, 4, 135–149. Arbaugh, J. B., & Benbunan-Fich, R. (2006). An investigation of epistemological and social dimensions of teaching in online learning environments. Academy of Management Learning & Education, 5, 435–447.
Arbaugh, J. B., & Benbunan-Fich, R. (2007). Examining the influence of participant interaction modes in Web-based learning environments. Decision Support Systems, 43, 853–865. doi:10.1016/j. dss.2006.12.013 Arbaugh, J. B., Godfrey, M. R., Johnson, M., Leisen Pollack, B., Niendorf, B., & Wresch, W. (2009). Research in online and blended learning in the business disciplines: Key findings and possible future directions. The Internet and Higher Education, 12(2), 71–87. doi:10.1016/j. iheduc.2009.06.006 Arbaugh, J. B., & Rau, B. L. (2007). A study of disciplinary, structural, and behavioral effects on course outcomes in online MBA courses. Decision Sciences Journal of Innovative Education, 5, 65–95. doi:10.1111/j.1540-4609.2007.00128.x Benbunan-Fich, R., & Arbaugh, J. B. (2006). Separating the effects of knowledge construction and group collaboration in Web-based courses. Information & Management, 43, 778–793. doi:10.1016/j.im.2005.09.001 Benbunan-Fich, R., & Hiltz, S. R. (2003). Mediators of the effectiveness of online courses. IEEE Transactions on Professional Communication, 46(4), 298–312. doi:10.1109/TPC.2003.819639 Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, E., Tamim, R. M., Surkes, M. A., & Bethel, E. C. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 79, 1243–1289. doi:10.3102/0034654309333844 Bernard, R. M., Abrami, P. C., Lou, Y., & Borokhovski, E. (2004). A methodological morass? How can we improve quantitative research in distance education. Distance Education, 25(2), 175–198. doi:10.1080/0158791042000262094
Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., & Wozney, L. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74, 379–439. doi:10.3102/00346543074003379 Berry, R. W. (2002). The efficacy of electronic communication in the business school: Marketing students’ perceptions of virtual teams. Marketing Education Review, 12(2), 73–78. Bickel, R. (2007). Multilevel analysis for applied research: It’s just regression!New York, NY: Guilford Press. Brown, B. W., & Liedholm, C. E. (2002). Can Web courses replace the classroom in principles of microeconomics? The American Economic Review, 92(2), 444–448. doi:10.1257/000282802320191778 Chen, C. C., & Jones, K. T. (2007). Blended learning vs. traditional classroom settings: Assessing effectiveness and student perceptions in an MBA accounting course. Journal of Educators Online, 4(1), 1–15. Cheung, L. L. W., & Kan, A. C. N. (2002). Evaluation of factors related to student performance in a distance-learning business communication course. Journal of Education for Business, 77, 257–263. doi:10.1080/08832320209599674 Clark, R. C., & Mayer, R. E. (2008). E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning (2nd ed.). San Francisco, CA: Pfeiffer. Connolly, M., Jones, C., & Jones, N. (2007). New approaches, new vision: Capturing teacher experiences in a brave new online world. Open Learning, 22(1), 43–56. doi:10.1080/02680510601100150
Dacko, S. G. (2001). Narrowing skill development gaps in marketing and MBA programs: The role of innovative technologies for distance learning. Journal of Marketing Education, 23, 228–239. doi:10.1177/0273475301233008 Davis, R., & Wong, D. (2007). Conceptualizing and measuring the optimal experience of the e-learning environment. Decision Sciences Journal of Innovative Education, 5, 97–126. doi:10.1111/j.1540-4609.2007.00129.x Drago, W., & Peltier, J. (2004). The effects of class size on the effectiveness of online courses. Management Research News, 27(10), 27–41. doi:10.1108/01409170410784310 Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students’ perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4, 215–235. doi:10.1111/j.1540-4609.2006.00114.x Gibson, S. G., Harris, M. L., & Colaric, S. M. (2008). Technology acceptance in an academic context: Faculty acceptance of online education. Journal of Education for Business, 83, 355–359. doi:10.3200/JOEB.83.6.355-359 Gratton-Lavoie, C., & Stanley, D. (2009). Teaching and learning of principles of microeconomics online: An empirical assessment. The Journal of Economic Education, 40(2), 3–25. doi:10.3200/ JECE.40.1.003-025 Grzeda, M., & Miller, G. E. (2009). The effectiveness of an online MBA program in meeting midcareer student expectations. Journal of Educators Online, 6(2). Retrieved November 10, 2009, from http://www.thejeo.com/Archives/Volume6Number2/GrzedaandMillerPaper.pdf Hansen, D. E. (2008). Knowledge transfer in online learning environments. Journal of Marketing Education, 30, 93–105. doi:10.1177/0273475308317702
Hitt, M. A., Beamish, P. W., Jackson, S. E., & Mathieu, J. E. (2007). Research in theoretical and empirical bridges across levels: Multilevel research in management. Academy of Management Journal, 50, 1385–1399. Hwang, A., & Arbaugh, J. B. (2009). Seeking feedback in blended learning: Competitive versus cooperative student attitudes and their links to learning outcome. Journal of Computer Assisted Learning, 25, 280–293. doi:10.1111/j.13652729.2009.00311.x Johnson, R. D., Hornik, S., & Salas, E. (2008). An empirical examination of factors contributing to the creation of successful e-learning environments. International Journal of Human-Computer Studies, 66, 356–369. doi:10.1016/j.ijhcs.2007.11.003 Kim, E. B., & Schniederjans, M. J. (2004). The role of personality in Web-based distance education courses. Communications of the ACM, 47(3), 95–98. doi:10.1145/971617.971622 Klein, H. J., Noe, R. A., & Wang, C. (2006). Motivation to learn and course outcomes: The impact of delivery mode, learning goal orientation, and perceived barriers and enablers. Personnel Psychology, 59, 665–702. doi:10.1111/j.17446570.2006.00050.x Landry, B. J. L., Griffeth, R., & Hartman, S. (2006). Measuring student perceptions of blackboard using the technology acceptance model. Decision Sciences Journal of Innovative Education, 4, 87–99. doi:10.1111/j.1540-4609.2006.00103.x Larson, P. D. (2002). Interactivity in an electronically delivered marketing course. Journal of Education for Business, 77, 265–245. doi:10.1080/08832320209599675 Liu, X., Bonk, C. J., Magjuka, R. J., Lee, S., & Su, B. (2005). Exploring four dimensions of online instructor roles: A program level case study. Journal of Asynchronous Learning Networks, 9(4), 29–48.
Liu, X., Magjuka, R. J., & Lee, S. (2006). An empirical examination of sense of community and its effects on students’ satisfaction, perceived learning outcome, and learning engagement in online MBA courses. International Journal of Instructional Technology & Distance Learning, 3(7). Retrieved September 15, 2006, from http://www.itdl.org/Journal/Jul_06/article01.htm Lu, J., Yu, C.-S., & Liu, C. (2003). Learning style, learning patterns, and learning performance in a WebCT-based MIS course. Information & Management, 40, 497–507. doi:10.1016/S03787206(02)00064-2 Martins, L. L., & Kellermanns, F. W. (2004). A model of business school students’ acceptance of a Web-based course management system. Academy of Management Learning & Education, 3, 7–26. Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, DC: U.S. Department of Education. Millson, M. R., & Wilemon, D. (2008). Educational quality correlates of online graduate management education. Journal of Distance Education, 22(3), 1–18. Mook, D. G. (1983). In defense of external invalidity. The American Psychologist, 38(4), 379–387. doi:10.1037/0003-066X.38.4.379 Morgeson, F. P., & Hofmann, D. A. (1999). The structure and function of collective constructs: Implications for multilevel research and theory development. Academy of Management Review, 24, 249–265. doi:10.2307/259081 Murphy, S. M., & Tyler, S. (2005). The relationship between learning approaches to part-time study of management courses and transfer of learning to the workplace. Educational Psychology, 25, 455–469. doi:10.1080/01443410500045517
Peltier, J. W., Schibrowsky, J. A., & Drago, W. (2007). The interdependence of the factors influencing the perceived quality of the online learning experience: A causal model. Journal of Marketing Education, 29, 140–153. doi:10.1177/0273475307302016 Popovich, C. J., & Neel, R. E. (2005). Characteristics of distance education programs at accredited business schools. American Journal of Distance Education, 19, 229–240. doi:10.1207/ s15389286ajde1904_4 Potter, B. N., & Johnston, C. G. (2006). The effect of interactive online learning systems on student learning outcomes in accounting. Journal of Accounting Education, 24, 16–34. doi:10.1016/j. jaccedu.2006.04.003 Saade, R. G. (2007). Dimensions of perceived usefulness: Toward enhanced assessment. Decision Sciences Journal of Innovative Education, 5, 289–310. doi:10.1111/j.1540-4609.2007.00142.x Santhanam, R., Sasidharan, S., & Webster, J. (2008). Using self-regulatory learning to enhance e-learning-based information technology training. Information Systems Research, 19, 26–47. doi:10.1287/isre.1070.0141 Webb, H. W., Gill, G., & Poe, G. (2005). Teaching with the case method online: Pure versus hybrid approaches. Decision Sciences Journal of Innovative Education, 3, 223–250. doi:10.1111/j.15404609.2005.00068.x Weber, J. M., & Lennon, R. (2007). Multi-course comparison of traditional versus Web-based course delivery systems. Journal of Educators Online, 4(2), 1–19. Whetten, D. A. (2008). Introducing AMLEs educational research databases. Academy of Management Learning & Education, 7, 139–143.
Yukselturk, E., & Top, E. (2005-2006). Reconsidering online course discussions: A case study. Journal of Educational Technology Systems, 34(3), 341–367. doi:10.2190/6GQ8-P7TXVGMR-4NR4
ADDITIONAL READING Arbaugh, J. B. (2000a). Virtual classroom characteristics and student satisfaction in Internet-based MBA courses. Journal of Management Education, 24, 32–54. doi:10.1177/105256290002400104 Arbaugh, J. B. (2000b). Virtual classrooms versus physical classrooms: An exploratory study of class discussion patterns and student learning in an asynchronous Internet-based MBA course. Journal of Management Education, 24, 213–233. doi:10.1177/105256290002400206 Arbaugh, J. B. (2000c). How classroom environment and student engagement affect learning in Internet-based MBA courses. Business Communication Quarterly, 63(4), 9–26. doi:10.1177/108056990006300402 Arbaugh, J. B. (2000d). An exploratory study of the effects of gender on student learning and class participation in an Internet-based MBA course. Management Learning, 31, 533–549. doi:10.1177/1350507600314006 Arbaugh, J. B. (2001). How instructor immediacy behaviors affect student satisfaction and learning in web-based courses. Business Communication Quarterly, 64(4), 42–54. doi:10.1177/108056990106400405 Arbaugh, J. B., & Duray, R. (2002). Technological and structural characteristics, student learning and satisfaction with web-based courses: An exploratory study of two MBA programs. Management Learning, 33, 231–247. doi:10.1177/1350507602333003
Arbaugh, J. B., & Hwang, A. (2006). Does “teaching presence” exist in online MBA courses? The Internet and Higher Education, 9, 9–21. doi:10.1016/j.iheduc.2005.12.001 Baugher, D., Varanelli, A., & Weisbord, E. (2003). Student hits in an Internet-supported course: How can instructors use them and what do they mean? Decision Sciences Journal of Innovative Education, 1, 159–179. doi:10.1111/j.15404609.2003.00016.x Bryant, K., Campbell, J., & Kerr, D. (2003). Impact of web based flexible learning on academic performance in information systems. Journal of Information Systems Education, 14(1), 41–50. Cao, J., Crews, J. M., Lin, M., Burgoon, J. K., & Nunnamaker, J. F. Jr. (2008). An empirical investigation of virtual interaction in supporting learning. The DATABASE for Information Systems, 39(3), 51–68. Conaway, R. N., Easton, S. S., & Schmidt, W. V. (2005). Strategies for enhancing student interaction and immediacy in online courses. Business Communication Quarterly, 68(1), 23–35. doi:10.1177/1080569904273300 Dineen, B. R. (2005). TeamXchange:Ateam project experience involving virtual teams and fluid team membership. Journal of Management Education, 29, 593–616. doi:10.1177/1052562905276275 Drago, W., Peltier, J., & Sorensen, D. (2002). Course content or the instructor: Which is more important in on-line teaching? Management Research News, 25(6/7), 69–83. doi:10.1108/01409170210783322 Friday, E., Friday-Stroud, S. S., Green, A. L., & Hill, A. Y. (2006). A multi-semester comparison of student performance between multiple traditional and online sections of two management courses. Journal of Behavioral and Applied Management, 8(1), 66–81.
Gagne, M., & Shepherd, M. (2001). A comparison between a distance and a traditional graduate accounting class. T.H.E. Journal, 28(9), 58–63. Hay, A., Peltier, J. W., & Drago, W. A. (2004). Reflective learning and online management education: A comparison of traditional and online MBA students. Strategic Change, 13(4), 169–182. doi:10.1002/jsc.680 Hayes, S. K. (2007). Principles of Finance: Design and implementation of an online course. Journal of Online Learning and Teaching, 3, 460–465. Heckman, R., & Annabi, H. (2005). A content analytic comparison of learning processes in online and face-to-face case study discussions. Journal of Computer-Mediated Communication, 10(2), article 7. Retrieved January 4, 2007, from http:// jcmc.indiana.edu/vol10/issue2/heckman.html Heckman, R., & Annabi, H. (2006). How the teacher’s role changes in on-line case study discussions. Journal of Information Systems Education, 17, 141–150. Hornik, S., & Tupchiy, A. (2006). Culture’s impact on technology-mediated learning: The role of horizontal and vertical individualism and collectivism. Journal of Global Information Management, 14(4), 31–56. doi:10.4018/jgim.2006100102 Hwang, A., & Arbaugh, J. B. (2006). Virtual and traditional feedback-seeking behaviors: Underlying competitive attitudes and consequent grade performance. Decision Sciences Journal of Innovative Education, 4, 1–28. doi:10.1111/j.15404609.2006.00099.x Kellogg, D. L., & Smith, M. A. (2009). Studentto-student interaction revisited: A case study of working adult business students in online courses. Decision Sciences Journal of Innovative Education, 7, 433–456. doi:10.1111/j.15404609.2009.00224.x
Kock, N., Verville, J., & Garza, V. (2007). Media naturalness and online learning: Findings supporting both the significant- and no-significantdifference perspectives. Decision Sciences Journal of Innovative Education, 5, 333–355. doi:10.1111/j.1540-4609.2007.00144.x Lane, A., & Porch, M. (2002). Computer Aided Learning (CAL) and its impact on the performance of non-specialist accounting undergraduates. Accounting Education, 11, 217–233. doi:10.1080/09639280210144902 Liu, X., Magjuka, R. J., & Lee, S. (2008). The effects of cognitive thinking styles, trust, conflict management on online students’ learning and virtual team performance. British Journal of Educational Technology, 39, 829–846. doi:10.1111/j.1467-8535.2007.00775.x Malhotra, N. K. (2002). Integrating technology in marketing education: Perspective for the new millennium. Marketing Education Review, 12(3), 1–5. Marks, R. B., Sibley, S., & Arbaugh, J. B. (2005). A structural equation model of predictors for effective online learning. Journal of Management Education, 29, 531–563. doi:10.1177/1052562904271199 McDowall, T., & Jackling, B. (2006). The impact of computer-assisted learning of academic grades: An assessment of students’ perceptions. Accounting Education, 15, 377–389. doi:10.1080/09639280601011065 Morgan, G., & Adams, J. (2009). Pedagogy first: Making web technologies work for soft skills development in leadership and management education. Journal of Interactive Learning Research, 20, 129–155. Navarro, P. (2000). Economics in the cyberclassroom. The Journal of Economic Perspectives, 14, 119–132. doi:10.1257/jep.14.2.119
Navarro, P., & Shoemaker, J. (1999). The power of cyberlearning: An empirical test. Journal of Computing in Higher Education, 11(1), 29–54. doi:10.1007/BF02940841 Navarro, P., & Shoemaker, J. (2000a). Performance and perceptions of distance learners in cyberspace. American Journal of Distance Education, 14(2), 15–35. doi:10.1080/08923640009527052 Navarro, P., & Shoemaker, J. (2000b). Policy issues in the teaching of economics in cyberspace: Research design, course design, and research results. Contemporary Economic Policy, 18, 359–366. doi:10.1111/j.1465-7287.2000.tb00032.x Nemanich, L., Banks, M., & Vera, D. (2009). Enhancing knowledge transfer in classroom versus online settings: The interplay among instructor, student, content, and context. Decision Sciences Journal of Innovative Education, 7, 123–148. doi:10.1111/j.1540-4609.2008.00208.x Parikh, M., & Verma, S. (2002). Using Internet technologies to support learning: An empirical analysis. International Journal of Information Management, 22, 27–46. doi:10.1016/S02684012(01)00038-X Parthasurathy, M., & Smith, M. A. (2009). Valuing the institution: An expanded list of factors influencing faculty adoption of online education. Online Journal of Distance Learning Administration, 12(2). Retrieved October 15, 2009, from http:// www.westga.edu/~distance/ojdla/summer122/ parthasarathy122.html Peltier, J. W., Drago, W., & Schibrowsky, J. A. (2003). Virtual communities and the assessment of online Marketing education. Journal of Marketing Education, 25, 260–276. doi:10.1177/0273475303257762
Piccoli, G., Ahmad, R., & Ives, B. (2001). Webbased virtual learning environments: A research framework and a preliminary assessment of effectiveness in basic IT skills training. Management Information Systems Quarterly, 25, 401–426. doi:10.2307/3250989 Saade, R., & Bahli, B. (2005). The impact of cognitive absorption on perceived usefulness and perceived ease of use in online learning: An extension of the technology acceptance model. Information & Management, 42, 317–327. doi:10.1016/j. im.2003.12.013 Sautter, P. (2007). Designing discussion activities to achieve desired learning outcomes: Choices using mode of delivery and structure. Journal of Marketing Education, 29, 122–131. doi:10.1177/0273475307302014 Schniederjans, M. J., & Kim, E. B. (2005). Relationship of student undergraduate achievement and personality characteristics in a total webbased environment: An empirical study. Decision Sciences Journal of Innovative Education, 3, 205–221. doi:10.1111/j.1540-4609.2005.00067.x Simmering, M. J., Posey, C., & Piccoli, G. (2009). Computer self-efficacy and motivation to learn in a self-directed online course. Decision Sciences Journal of Innovative Education, 7, 99–121. doi:10.1111/j.1540-4609.2008.00207.x Terry, N. (2000). The effectiveness of virtual learning in economics. Journal of Economics and Economic Education Research, 1, 93–99. Williams, E. A., Duray, R., & Reddy, V. (2006). Teamwork orientation, group cohesiveness, and student learning: A study of the use of teams in online distance education. Journal of Management Education, 30, 592–616. doi:10.1177/1052562905276740
Yoo, Y., Kanawattanachai, P., & Citurs, A. (2002). Forging into the wired wilderness: A case study of a technology-mediated distributed discussionbased class. Journal of Management Education, 26, 139–163. doi:10.1177/105256290202600203
KEY TERMS AND DEFINITIONS
Control Variables: Research variables that are not among the predictor variables of primary interest but that may nevertheless predict criterion variables.
Criterion (Dependent) Variables: Research variable(s) whose outcome is associated with changes in predictor variables; typically a variable of primary interest in a research study.
Hierarchical Linear Modeling (HLM): A statistical technique that controls for effects of the nesting of variables within specific contexts, such as courses, institutions, or academic disciplines.
Multiple Regression Analysis: A statistical technique designed to predict a criterion variable using a set of predictor variables.
Predictor (Independent) Variables: Research variables that are associated with, and often assumed to be the cause of, changes in criterion variables.
Quasi-Experimental Comparative Designs: Research designs in which at least one treatment group and one control group are compared on variable(s) of interest but study participants are not randomly assigned to a group.
Randomized Experimental Designs: Research designs consisting of at least one treatment group and one control group in which study participants are randomly assigned to each group.
Structural Equation Modeling (SEM): A statistical approach that allows for the measurement of relationships between observed variables using groupings of latent variables; the approach also allows researchers to consider potential effects of measurement error.
Virtual Learning Environments: Educational settings in which the dissemination of course content and interaction between course participants are conducted at least partially online.
Chapter 4
An Introduction to Path Analysis Modeling Using LISREL Sean B. Eom Southeast Missouri State University, USA
ABSTRACT Over the past decades, there has been a wide range of empirical research in the e-learning literature. The use of multivariate statistical tools has been a staple of the research stream throughout the decade. Path analysis modeling is one of four related multivariate statistical models: regression, path analysis, confirmatory factor analysis, and structural equation models. This chapter focuses on path analysis modeling for beginners using LISREL 8.70. Topics covered in this chapter include foundational concepts, assumptions, and the steps of path analysis modeling. The major steps in path analysis modeling explained in this chapter consist of the specification, identification, estimation, testing, and modification of models.
INTRODUCTION Tremendous advances in information technology and the changing demographic profile of the student population have allowed colleges and universities to offer Internet-based courses as a way to meet the ever-increasing demand for higher and continuing education. In the early e-learning systems developmental stage, the focus
of research was on the non-empirical dimensions of e-learning systems. E-learning systems include learning management systems, course management systems, and virtual learning environments. There is a wide range of free and/or open source learning management systems (e.g., eFront) and course management systems (e.g., Dokeos, ILIAS, Moodle, etc.). Many well-known virtual learning environments are available to facilitate the creation of virtual classrooms (e.g., Blackboard, WebCT, FirstClass, Desire2Learn,
CyberExtension, It’s Learning, WebTrain, etc.). Some universities have developed their own custom learning environments for creating and managing e-learning systems. Furthermore, they have spent heavily to constantly update their online instructional resources, computer labs, and library holdings. It is now evident that the technology itself may no longer be an impediment. A distance learning system can be viewed as several human and non-human entities interacting via computer-based instructional systems to achieve the goals of education, including perceived learning outcomes and student satisfaction. During the past decade, the volume of research in online and blended business education has increased dramatically. The most common e-learning research streams across business disciplines were outcome comparison studies with classroom-based learning and studies examining potential predictors of course outcomes (Arbaugh et al., 2009). The Dimensions and Antecedents of VLE Effectiveness introduced by Piccoli, Ahmad, and Ives (2001) contributed to developing new empirical research models. User satisfaction is the overall measure of the student’s perceived level of fulfillment in the online course. The review of e-learning empirical research indicates that numerous quantitative research methods have been utilized. They include categorical data analysis using the chi-square test and multivariate data analysis techniques, including analysis of covariance (ANCOVA), general linear model multivariate analysis of covariance (MANCOVA), conjoint analysis, canonical correlation analysis, discriminant analysis, multiple regression analysis, path analysis, factor analysis (confirmatory vs. exploratory), and structural equation modeling (SEM) using PLS Graph, SmartPLS, LISREL, AMOS, and EQS. Moreover, qualitative research methods have been applied to examine the effects of various factors or variables on student satisfaction and learning outcomes. Qualitative research methods include
action research, case study research, the grounded theory approach, ethnographic research, etc.
MAIN FOCUS The use of multivariate statistical techniques has been a staple of the e-learning empirical research stream throughout the decade. This may reflect the fact that the field has drawn in some researchers who have been trained in such analytical techniques, which are common in many business disciplines, and those scholars simply brought these techniques with them to design and analyze studies of online learning. Moreover, there is a growing number of studies that have used highly sophisticated statistical techniques such as structural equation models and hierarchical models in recent years (Arbaugh, Hwang, & Pollack, 2010). This chapter focuses on covariance-based path analysis modeling using LISREL 8.70. Structural equation modeling (SEM) is “a comprehensive statistical approach to testing hypotheses about relations among observed and latent variables” (Hoyle, 1995). SEM methodology is used to test four types of theoretical models: regression, path, confirmatory factor, and structural equation models. LISREL is capable of modeling all four. All four models can be tested by following five steps: specification, identification, parameter estimation, testing, and modification. To complement this chapter on path modeling, several other chapters are concerned with path modeling applications, an introduction to SEM using PLS Graph, and SEM applications. The remainder of this chapter is organized into the following sections:
• Foundational concepts – observed and latent variables, dependent and independent variables, and regression models
• Assumptions
• Path modeling steps (specification, identification, parameter estimation, testing, and modification)
• Summary

Figure 1. A classification of SEM models
FOUNDATIONAL CONCEPTS As shown in Figure 1, a family of SEM techniques consists of regression, path, confirmatory factor, and structural equation models. All of these models can be classified by three criteria: the observability of variables, the existence of mediating variables, and the need for path analysis with latent variables. There are already abundant reference sources on path analysis modeling. This chapter is an introduction to path analysis modeling for beginners. With that audience in mind, this section discusses several foundational concepts to help the reader better understand the concepts and procedures of path analysis modeling.
Observed Variables and Latent Variables Observed variables are variables that can be measured or observed directly. For example, to establish a theoretical relationship between e-learning system quality and user satisfaction, a set of questionnaire items was designed. Each of the six questions below is considered an observed variable or indicator variable. Latent variables are not observed or measured directly. Rather, they are measured indirectly through several observed variables that can be collected using surveys, tests, interviews, etc. For example, system quality and user satisfaction are latent variables (constructs, or factors) that are indirectly inferred from questionnaire items 1, 2, and 3 and items 4, 5, and 6, respectively.
System Quality
1. The e-learning system is easy to use.
2. The e-learning system is user-friendly.
3. The e-learning system provides interactive features between users and system.
User Satisfaction
4. The academic quality was on par with face-to-face courses I’ve taken.
5. I would recommend this course to other students.
6. I would take an on-line course at this university again in the future.

Dependent Variables and Independent Variables
Both observed and latent variables can be classified into either exogenous/independent or endogenous/dependent variables. The former comes from the Greek words “exo” and “gen”, which mean “outside” and “production”, referring to uncontrollable variables coming from outside of the model; therefore, their values are given. Endogenous or dependent variables are the opposite of exogenous, and their values depend on exogenous or independent variables. User satisfaction may be dependent on system quality and other exogenous variables. A beginner in structural equation modeling may encounter many different terms referring to dependent and independent variables.
• The independent variables are denoted as “x”. They are also known as:
◦ Exogenous variables
◦ Explanatory variables
◦ Predictor variables
◦ Cause variables
• The dependent variables are the variables to be predicted and denoted as “y”. They are also known as:
◦ Endogenous variables
◦ Response variables
◦ Effect variables

Multiple Regression Analysis
From a historical viewpoint, a family of SEM techniques consists of regression, path, confirmatory factor, and structural equation models, in the chronological order of development. Each successive model is capable of modeling more complex relationships among variables and uses the previous models. For example, path models use regression analysis and correlation coefficients. Structural equation models use all previous models (regression, path, and confirmatory factor) to model complex relationships among latent variables. In order to understand path models, it is necessary to understand multiple regression models. Therefore, this section briefly introduces regression models. Multiple regression analysis is a statistical technique to define the linear relationship between a dependent variable and multiple independent variables. The regression model can be classified into either the simple regression model or the multiple regression model. The simple regression model involves only one independent variable and one dependent variable. The multiple regression model has one dependent variable and multiple independent variables. A probabilistic regression model includes error terms. The simple regression model can be written as:
y = β0 + β1x + ε
where β0 = the population intercept, β1 = the population slope, and ε = an error term.
In most cases of regression analysis, the sample data are used to estimate the population intercept and slope. Therefore, using the sample data, the equation for the simple regression line is as follows:
y = b0 + b1x
where b0 = the sample intercept and b1 = the sample slope. The multiple regression model can be written as:
y = β0 + β1x1 + β2x2 + … + βkxk + ε
where
β0 = the population intercept
β1 = the partial regression coefficient for independent variable x1
β2 = the partial regression coefficient for independent variable x2
βk = the partial regression coefficient for independent variable xk
k = the number of independent variables, and
ε = an error term (also known as the disturbance).
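To make the notation concrete, the short sketch below estimates the sample intercept b0 and the partial regression coefficients by ordinary least squares. The data are simulated and the variable names are illustrative; this is a minimal sketch rather than an analysis from the chapter.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated illustration of a two-predictor multiple regression
rng = np.random.default_rng(42)
n = 250
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
df["y"] = 2.0 + 0.6 * df["x1"] + 0.3 * df["x2"] + rng.normal(scale=1.0, size=n)

# Ordinary least squares estimates of the sample intercept b0 and slopes b1, b2
fit = smf.ols("y ~ x1 + x2", data=df).fit()
print(fit.params)      # intercept and partial regression coefficients
print(fit.rsquared)    # proportion of variance in y explained
```

The fitted coefficients are the sample estimates of the population parameters in the equation above; the residuals of this fit are the estimated disturbances discussed in the assumptions that follow.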
PATH ANALYSIS MODELING AND ITS ASSUMPTIONS Multiple regression models examine the relationship between several directly observable independent variables and a directly observable dependent variable. It is not possible to estimate
relationships among dependent variables using multiple regression models. For example, in Figure 4, system use and user satisfaction are endogenous variables which depend on the values of system quality and information quality. Regression models allow us to estimate the relationship between a single dependent variable (system use) and multiple independent variables (system quality and information quality). However, when researchers are interested in testing the theoretical relationships among observed variables as depicted in Figure 5, which include the relationships among dependent variables and mediating variables (system use and user satisfaction), a path analysis model needs to be used. There is no stand-alone software package dedicated solely to path analysis modeling; structural equation modeling packages are available to do the whole range of structural equation modeling, including path analysis. Readers are referred to the following:
• http://www.mvsoft.com/index.htm (EQS software)
• http://www.mvsoft.com/eqsBooks.htm (EQS books)
• http://www.spss.com/amos/ (AMOS software)
• http://www.ssicentral.com/ (LISREL software)
A path model tests theoretical relationships among multiple variables when the following conditions are met. First, there is temporal ordering of variables. Second, covariation or correlation exists among variables. Third, variables are quantitative (interval or ratio level). Path analysis modeling assesses the causal contribution of directly observable variables to other directly observable variables. Unlike structural equation models that are concerned with latent variables, path analysis models examine the causal contribution of directly observable variables.
Since path models are a logical extension of regression models, path models and regression models share the same assumptions.
1. Linearity – path models assume that the relationships among variables are linear. Consequently, regression and path analyses cannot be applied if the relationship between two variables is nonlinear, such as a curvilinear relationship, which is characterized by curved lines.
2. Data are normally distributed. Another important assumption is that all variables in path models are normally distributed, with some important characteristics. The normal distribution is a continuous, symmetrical distribution about its mean. It is also asymptotic to the horizontal axis.
3. Disturbances (a.k.a. residuals), usually denoted as ei or εi, are also normally distributed random variables. The disturbance is the difference between the actual value of the dependent variable (y) and the predicted value ŷ at a given value of the independent variable x. The disturbance includes the effects of all unknown factors which influence the value of y. The sum of the residuals is zero; therefore, the mean or expected value of the ei is also zero. This assumption will always be true because the regression line is estimated by the least-squares method.
4. Each of the disturbances along the regression line has a constant variance regardless of the value of x. See Dielman (1996, pp. 82-83) for a detailed discussion of this assumption and others.
5. The disturbances are independent. This means that the disturbance/residual variables are not correlated with any variable in the model and that a residual plot should not display any systematic pattern.
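The normality, mean-zero, and constant-variance assumptions for the disturbances can be screened with standard diagnostics before a path model is estimated. The sketch below uses simulated data for a single equation; the predictors and their values are illustrative only.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan

# Simulated data for one structural equation (two predictors, one outcome)
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 2))                              # e.g., system quality, information quality
y = 1.0 + X @ np.array([0.5, 0.3]) + rng.normal(scale=0.8, size=300)

fit = sm.OLS(y, sm.add_constant(X)).fit()
residuals = fit.resid

# Assumption 3: disturbances normally distributed with mean zero
print("Shapiro-Wilk p-value:", stats.shapiro(residuals).pvalue)
print("Mean residual (should be ~0):", residuals.mean())

# Assumption 4: constant variance (homoscedasticity)
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(residuals, fit.model.exog)
print("Breusch-Pagan p-value:", lm_pvalue)
```

A residual-versus-fitted plot is the usual companion check for assumption 5; any systematic pattern suggests a misspecified or nonlinear relationship.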
Sample Size
Sample size is an important consideration to ensure that the purpose of the study can be fully accomplished. The number of parameters is an important input for deciding the size of a sample that can assess the significance of tested model effects. Kline (1998) suggested the following:
• Ideal sample size = the number of parameters × 20 or more
• Adequate sample size = the number of parameters × 10 or more
• Insufficient sample size = the number of parameters × 5 or less
The types of parameters in path analysis and structural equation models include the following:
• Path coefficients
• Equation error variances
• Correlations among the independent variables
• Independent variable variances
The number of each type of parameter can be counted from the specified path model. The model identification section shows an example of how to count the number of the different types of parameters. According to Schumacker and Lomax (2010, p. 211), “In traditional multivariate statistics, the rule of thumb is 20 subjects per variables (20:1). The rule of thumb used in structural equation modeling vary from 100, 200, to 500 or more subjects per study, depending on model complexity and cross-validation requirements.” Hair et al. (Hair, Black, Babin, & Anderson, 2010) suggested the following:
• Minimum sample size of 100 for models with five or fewer constructs.
• Minimum sample size of 150-300 for models with seven or fewer constructs.
• Minimum sample size of 500 for models with a large number of constructs.
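As a worked illustration of Kline's (1998) rule of thumb, the short sketch below counts free parameters for a hypothetical path model and converts the count into target sample sizes. The parameter counts are invented for illustration and are not taken from the chapter's example model.

```python
# Hypothetical free-parameter counts for a small path model
parameters = {
    "path_coefficients": 8,
    "equation_error_variances": 3,
    "exogenous_correlations": 1,
    "exogenous_variances": 2,
}

total = sum(parameters.values())
print("Total free parameters:", total)
print("Ideal sample size (x20):   ", total * 20)
print("Adequate sample size (x10):", total * 10)
print("Insufficient if below (x5):", total * 5)
```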
PATH MODELING STEPS There are several steps involved in path modeling: specification, identification, estimation, testing, and modification.
Specification Model specification is the first step: identifying all relevant variables in the model and specifying the proper relationships among them. Typically, before model specification, a complete survey of the literature on the specific research area must be conducted to identify major issues for further investigation. Identifying all relevant variables specifically means including only necessary variables and excluding extraneous ones. Needless to say, either including extraneous variables or excluding essential variables will lead to a path model that may not truly reflect the causal relationships among variables. Moreover, it can affect the path coefficients significantly, and therefore the model results can be unreliable. Building a parsimonious model with a few substantively meaningful paths is also an important consideration in path modeling. A theoretical model in structural equation modeling (SEM) with all paths specified is of limited interest; this is the case of a saturated model, which will show little difference between the sample variance-covariance matrix and the reproduced (implied) covariance matrix (Schumacker & Lomax, 2004). Figure 2 demonstrates common path diagram symbols. An observed variable is enclosed by either a rectangle or a square. A measurement error in an observed variable is denoted by an oval shape with an attached unidirectional arrow pointing to that observed variable. Single-headed arrows indicate unidirectional (recursive) paths. A
reciprocal (non-recursive) path between variables can be drawn by using two arrows with opposite directions. Double-headed or curved arrows denote correlation between variables. Figure 3 demonstrates all possible hypothetical models among three variables (x1, x2, and y), taken from Schumacker and Lomax (2004). The path model examines the relationships among only observed variables. However, structural equation models include not only observed variables but also latent variables (constructs or factors), which are measured indirectly by combining several observed variables. All variables, whether latent or observed, can be classified as either exogenous or endogenous variables. The term “exogenous” comes from the Greek words “exo” and “gen”, meaning “outside” and “production”. Exogenous variables are the variables with no causal links (arrows) leading to them from other variables in the model. Exogenous variables are derived from outside a system and are therefore not controllable (independent). Endogenous variables originate from within a system; they are controllable and are affected by exogenous variables. A unidirectional path is drawn from an exogenous variable to an endogenous variable.
An Example of Specification To further demonstrate various aspects of path modeling, we introduce a simple path analysis model to help readers understand basic concepts, modeling procedures, and the interpretation of LISREL outputs. To conduct a path analysis, the following questions are selected from a larger survey instrument. The survey questionnaire was developed using a 7-point Likert scale. Students responded to each statement based on their level of agreement, ranging from strongly disagree to strongly agree: 1 (strongly disagree), 2 (disagree), 3 (somewhat disagree),
63
An Introduction to Path Analysis Modeling Using LISREL
Figure 2. Path model symbols
4 (neutral), 5 (somewhat agree), 6 (agree), and 7 (strongly agree). • • • • •
System Quality: The system is user-friendly. Information Quality: The system provides information that is exactly what you need. System Use: I frequently use the system. User Satisfaction: Overall, I am satisfied with the system. Learning Outcome: I feel that online learning is equal to the quality of traditional classroom learning.
With the absence of latent variables in the model, two analysis techniques are available
(see Figure 1): regression analysis and path analysis. The presence of mediating variables in the specified model determines which of the two techniques should be applied, and Figure 1 shows that the path model is the appropriate one here. In the bivariate regression model (Figure 4), there is no mediating variable: there are two exogenous (or independent) variables on the left-hand side and two endogenous variables on the right-hand side. The path analysis model (Figure 5) contains two mediators (or mediating variables), system use and user satisfaction. In the path analysis model, system quality influences system use, which in turn influences user satisfaction and e-learning outcomes. System use is said to be
a mediating variable between system quality and e-learning outcomes. User satisfaction serves as a mediator between information quality and e-learning outcomes, system quality and e-learning outcomes, and system use and e-learning outcomes. Our model in Figure 5 examines the relationships among five observed variables. The two independent variables are system quality and information quality. The two mediating variables are system use and user satisfaction. The dependent variable is e-learning outcomes. Using the hypothetical three-variable model of Schumacker and Lomax (2004), with two independent variables (x1 and x2) and one dependent variable (y), five different models are specified (Figure 3). The first three models are built on the assumption that x1 influences x2. The two models at the bottom assume that x1 does not influence x2. Under the alternative assumption that x2 influences x1, several additional models could be specified. Despite the numerous possible theoretical models, the model specification must be guided by theory grounded in the literature review.
Figure 3. Possible three-variable path models (Source: Schumacker & Lomax, 2004, pp. 154-155)
Identification The next step, after the specification of a path model, is to decide whether the model is identifiable. The identifiability of a path model can be determined by comparing the number of parameters to be estimated (unknowns) and the number of distinct values in the covariance matrix (knowns). If the number of knowns is equal to the number of unknowns in the path model, it is called “just identified”. If the number of unknowns is less than the number of knowns, it is the case of “over-identified” model. The last case, “under identified” model, occurs when the number of unknowns is more than the number of knowns (see Table 1). The term determination is often used interchangeably with the term identification. Therefore, the terms justdetermination, underdetermination, and overdetermination are
used with or without hyphens. It is critical to avoid building an under-identified model: if a path model is under-identified, a unique path solution cannot be computed. A model is just-identified if the number of parameters is equal to the number of distinct values in the covariance matrix. The output will indicate that such path analyses are saturated models and therefore chi-square = 0 and degrees of freedom = 0. Figure 5 illustrates the case of a just-identified model.
Table 1. Three cases of model identification problems
• Unknowns > Knowns: Under Identified
• Unknowns = Knowns: Just Identified
• Unknowns < Knowns: Over Identified
(Unknowns = parameters to be estimated; Knowns = distinct values in the covariance matrix)
Figure 4. Bivariate regression model
Figure 5. Path analysis model
There are a total of 15 free parameters to be estimated:
• Path coefficients – 9
• Equation error variances – 3
• Correlation among the independent variables – 1
• Independent variable variances – 2
The number of distinct values in the covariance matrix is equal to p(p+1)/2 = 5(5+1)/2 = 15, where p is the number of observed variables. Therefore, the model is just identified. This is an example of a saturated model. A saturated model has χ² = 0, and the output will indicate a perfect fit:

Goodness of Fit Statistics
Degrees of Freedom = 0
Minimum Fit Function Chi-Square = 0.0 (P = 1.00)
Normal Theory Weighted Least Squares Chi-Square = 0.00 (P = 1.00)
The Model is Saturated, the Fit is Perfect!
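Equivalently, the model's degrees of freedom are the knowns minus the unknowns, which is a quick way to anticipate what the output will report:

$$df = \frac{p(p+1)}{2} - (\text{free parameters}) = 15 - 15 = 0$$

This is why the saturated model shows zero degrees of freedom and a chi-square of exactly zero.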
Schumacker and Lomax discussed the meaning of saturated model in structural equation modeling (SEM) this way (Schumacker & Lomax, 2004, pp. 82-83): A chi-square value of zero indicates a perfect fit, or no difference between values in the sample covariance matrix S and the reproduced implied covariance matrix ∑ that was created based on the specified theoretical model. Obviously, a theoretical model in SEM with all path specified is of limited interest (saturated model). The goal in structural equation modeling is to achieve a parsimonious model with a few substantive meaningful paths and a non-significant chi-square value close to this saturated model value, thus indicating little difference between the sample variance-covariance matrix and the reproduced implied covariance matrix. The difference between
these two covariance matrices is contained in a residual matrix. When the chi-square value is non-significant (close to zero), the theoretically specified model fits the sample data; hence there is little difference between the sample variance-covariance matrix and the model-implied reproduced variance-covariance matrix. Figure 6 illustrates the re-specified path analysis model, obtained by deleting two paths. This model now becomes an over-identified model because the total number of distinct values in the covariance matrix (15) is greater than the number of parameters to be estimated (13).
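By the same counting rule, the re-specified model has

$$df = 15 - 13 = 2$$

degrees of freedom, so its chi-square statistic (unlike that of the saturated model) can now be used to assess how well the model reproduces the sample covariance matrix.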
Estimation
Model identification is followed by model estimation. This step estimates the parameters of the theoretical model. Figure 7 shows the first user-interface screen of LISREL 8.2. It has only three submenus – file, view, and help. The LISREL command file must be created first by clicking the file menu. Figure 7 also shows drop-down menus with several choices. By clicking file, you will begin to build a SIMPLIS (SIMPle LISrel) file. LISREL 8 allows the user to use two different command languages (the LISREL command language and the SIMPLIS command language) to build an input file. The SIMPLIS command language is easy to learn and use, and it became available with LISREL version 8 (Jöreskog & Sörbom, 1993).
The SIMPLIS Command Language Syntax
The SIMPLIS command language syntax consists of the following sections:
• Title
• Variables
• Observed variables
• Raw data / covariance matrix / correlation matrix
• Sample size
• Relationships / Paths
• End of problem

Figure 6. Re-specified path analysis model
The title is optional and can span a single line or multiple lines. Variables are used to define acronyms to be used later in the observed variables section. Data for LISREL 8 can be entered in one of the following forms: raw data, a covariance matrix, or a correlation matrix. Raw data can be placed directly in the input file; however, it is recommended that the raw data be stored in an external file and read with the following command line:

Raw data from File filename

The next section specifies the sample size. A total of 674 valid, unduplicated responses were used to fit the path analysis model. This can be written in several different formats:
• Sample size 674
• Sample size = 674
• Sample size: 674
The next section defines the relationships among observed variables. This can be done using either of two formats: paths or relationships. The path model in Figure 6 can be specified using one of the two formats below.

Paths
sq -> su us
iq -> su us
su -> us outcome
us -> outcome

Relationships
outcome = su us
us = sq iq su
su = sq iq
Figure 7. LISREL windows application screen
The command file is saved with an SPL extension. LISREL has two types of user interfaces: the drop-down menus and the graphical user interface (GUI). The GUI has 12 buttons. To run this command language file (introtopathanalysisdatafromfile.spl), the user clicks the fifth button from the left, labeled "L". Figure 8 shows the contents of the data file (jimsurvey.dat). The data file does not include variable names. It can be stored in the same directory as the command language file or in a different location (Figure 9).
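Assembling the pieces above, a complete SIMPLIS command file for the re-specified model in Figure 6 might look as follows. This is only a sketch: the title line is ours, the optional Path Diagram request (intended to produce the path diagram file discussed below) is an addition, and the exact keyword spellings should be checked against the LISREL 8 documentation. The variable names, data file name, and sample size are taken from the examples in this chapter.

Path Analysis of the Re-specified Model in Figure 6
Observed Variables: sq iq su us outcome
Raw Data from File jimsurvey.dat
Sample Size = 674
Paths
sq -> su us
iq -> su us
su -> us outcome
us -> outcome
Path Diagram
End of Problem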
Testing and Modification
Model testing tests the fit of the correlation matrix against the theoretical causal model built by researchers based on the extant literature. The outputs from running path analysis models are:
(1) the path analysis model with SPL extension, (2) the output file with OUT extension, and (3) the path model graphical file with PTH extension. The output file contains a wide range of useful information, including multiple fit indices available to test the model (see Figure 10).
The LISREL Output File
The output file contains the following information:
• Covariance Matrix
• LISREL Estimates using Maximum Likelihood
  ◦ Structural equations
  ◦ Reduced form equations
• Covariance Matrix of Independent Variables
• Covariance Matrix of Latent Variables
Figure 8. Path analysis model editing screen
Figure 9. Data file for the path analysis model
• Goodness of Fit Statistics
Prior to examining the LISREL estimates, it makes sense to examine the goodness-of-fit statistics first. If the fit statistics do not indicate good fit, the researcher may modify the previous model. As Figure 11 shows, the goodness-of-fit statistics include an extensive array of fit indices that can be categorized into six different subgroups of statistics that may be used to determine model fit. For a very good overview of LISREL goodness-of-fit statistics, readers are referred to Byrne (1998, pp. 109-119) and Hooper, Coughlan, and Mullen (2008). In regard to the choice of indices to assess model fit, Byrne (1998, p. 118) states:

Having worked your way through this smorgasbord of goodness-of-fit measures, you are likely
Figure 10. Path analysis output
feeling totally overwhelmed and wondering what to do with all this information! Although you certainly do not need to report the entire set of fit indices, such an array can give you a good sense of how well your model fits the sample data. But how does one choose which indices are appropriate in the assessment of model fit? Unfortunately, this choice is not a simple one, largely because particular indices have been shown to operate somewhat differently given the sample size, estimation procedure, model complexity, violation of the underlying assumptions of multivariate normality and variable independence, or any combination thereof.
There seems to be agreement among SEM researchers that it is not necessary to report every goodness-of-fit statistic from the path analysis output (Figure 11). For SEM beginners, it is not an easy task to choose a set of fit indices for the assessment of model fit, due to the complexity and multiple dimensions involved in the choice of good indices (Tanaka, 1993). Although there are no golden rules that can be agreed upon, readers are referred to the appendix of Chapter 5 of this book (Testing the DeLone-McLean Model of Information System Success in an E-Learning Context) for a brief discussion of several indices that have been frequently reported, and suggested for reporting, in the literature (Boomsma, 2000; Crowley & Fan, 1997; Hayduk, Cummings,
Figure 11. Goodness of fit statistics from path analysis output
Boadu, Pazderka-Robinson, & Boulianne, 2007; Hooper, Coughlan, & Mullen, 2008; Kline, 2005; McDonald & Ho, 2002).
LISREL Parameter Estimates
Path analysis using LISREL estimates the coefficients of a set of linear structural equations. The path analysis output shows two different kinds of output: structural equations and reduced form equations. The structural equations are composed of independent (cause) variables and dependent (effect) variables. A single regression equation model and a bivariate regression model can also be analyzed by path analysis in LISREL. Outputs
from the single regression and bivariate regression analysis include only structural equations and the estimated relationships between the effect variables and cause variables. However, our model outputs list two equations (structural form equations and reduced form equations) and the two estimated relationships of each equation because the path model includes the structural form equations that define the relationships among the cause variables. In the model, system use, user satisfaction, and self-regulated learning behavior are endogenous, but they are also intervening variables. If covariance among measurement errors of three intervening variables equals zero, we can apply ordinary least square (OLS). However,
structural equation models assume that there is covariance among the measurement errors of the three intervening variables. LISREL estimates all structural coefficients simultaneously, not separately. Since our model is tested based on a sample size of 674, the chi-square statistic is not a good measure of goodness of fit: the chi-square statistic nearly always rejects the model when large samples are used (Bentler & Bonnet, 1980). The RMSEA is the second fit statistic reported in the LISREL program. A cut-off value close to 0.05 indicates a close fit, and values up to 0.08 are considered to represent reasonable errors of approximation (Jöreskog & Sörbom, 1993). The LISREL output provides two different sections: structural equations and reduced form equations. The structural equations consist of all the equations, including the mediating variables (system use and user satisfaction). The reduced form equations show only the effects of the exogenous (independent) variables on the endogenous variables.
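As a point of reference (the chapter does not give the formula), the RMSEA reported in the output is conventionally computed from the chi-square statistic as

$$RMSEA = \sqrt{\frac{\max(\chi^2 - df,\, 0)}{df\,(N-1)}}$$

which is one reason it, unlike the raw chi-square test, does not automatically reject a model simply because the sample size N is large.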
Structural Equations The term, structural equations, refers to the simultaneous equations in a model. It is also known as multiequations. It refers to the equation that contains mediating variables. The mediating variable functions as the dependent (response) variable in one equation. At the same time, it functions as the independent variable (predictor) in another equation. Structural equations outputs show direct effects and indirect effects of all exogenous variables and mediating variables. The output below shows that system quality, information quality, computer self-efficacy have no effects on the e-learning outcomes. Structural equations also illustrate the effects of endogenous variables on each other. It indicates that only one variable (self-efficacy) positively influences the e-learning outcomes. System quality, information quality, user satisfaction have no effect on the use of e-learning
systems. User satisfaction is positively influenced by system quality, information quality, and self-managed learning behavior. The perceived learning outcomes are positively influenced by user satisfaction, self-managed learning behavior, and self-efficacy.
Path Coefficients
To interpret the output from LISREL, a path coefficient (path weight) must be understood. A path coefficient is the standardized partial regression coefficient for an independent variable. In Figure 12, the estimated path coefficients appear before the * in front of each variable name. To demonstrate how to interpret the LISREL output, we will use the first line of the LISREL output:

su = 0.15*sq + 0.27*iq, Errorvar.= 0.79, R² = 0.22
        (0.040)   (0.039)            (0.043)
         3.85      6.91               18.32
E-learning system use (su) is expected to increase by 0.15, on average, when students' perception of the quality of the e-learning system (sq), measured by "the system is user-friendly," increases by one unit on the 7-point Likert scale while the other variable (iq) remains fixed.
Standard Errors of the Estimate
The number enclosed in parentheses below each path coefficient is the standard error of the estimate. It is the standard deviation of the residuals (errors) for the regression model. Residual analysis tests whether the regression line is a good fit to the data (Black, 2008). The residuals of regression/path models are defined as the difference between the observed value y and the predicted value ŷ. The sum of all the residuals is always zero. Therefore, to measure the variability of y, the next step is to compute the standard error of the estimate (the standard deviation of the residuals/errors for the regression model) by finding the sum of
Figure 12. Structural equations and reduced form equations
squares of error (residuals), SSE. The final step in finding the standard error of the estimate (Se) is to divide SSE by the degrees of freedom for error in the model and take the square root of that value:

$$\text{Residual} = y - \hat{y}$$

$$SSE = \sum (y - \hat{y})^2$$

$$S_e = \sqrt{\frac{SSE}{n - k - 1}}$$

where n = number of observations and k = number of independent variables.
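For instance, with purely hypothetical values of SSE = 50, n = 674 observations, and k = 2 independent variables (numbers chosen only to illustrate the formula, not taken from the chapter's data):

$$S_e = \sqrt{\frac{50}{674 - 2 - 1}} = \sqrt{\frac{50}{671}} \approx 0.27$$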
t-Values and Their Interpretation
The numbers below the standard errors of the estimate are the t-values. A t-value is the ratio obtained by dividing a path coefficient by its standard error of the estimate. In Figure 12, the first t-value (3.85) is the ratio between the estimate (path coefficient) and its standard error of the estimate (.15/.04 = 3.75). This ratio (3.75) is not exactly the 3.85 shown in Figure 12, because the rounded path coefficient (.15) and rounded standard error (.040) are not the values actually used to produce the t-value of 3.85; instead of .15, the unrounded path coefficient could be, for example, .15335 or .14997. A high t-value indicates that the path coefficient is non-zero (significant). What is the threshold value of t? In structural equation modeling, the rule is to use a critical value. AMOS uses critical ratio (C.R.) values (Byrne, 2010). LISREL uses
Figure 13. Covariance matrix of independent variables
the T value. However, this T must not be confused with the Student t distribution. In inferential statistics, when the population standard deviation (σ) is unknown and the population is normally distributed, the population mean can be estimated using the Student t distribution, which was developed by William S. Gosset. In SEM, including path analysis modeling, the T value that is typically used is T > 1.96. Traditionally these critical values are called t values, but they are in fact z critical values; this is just a historical artifact in the field (Eom, 2004). A hypothesis test for the regression coefficient of each independent variable determines whether it is zero or not. Therefore, the null hypothesis is β1 = 0 and the alternative hypothesis is β1 ≠ 0. The same hypothesis is developed and tested for all independent variables. When the alternative hypothesis contains ≠, this is the case of a two-tailed test. With the specified type I error rate (α) = .05, the rejection region lies in the two tails of the standard normal distribution (2.5% in each tail). The value 1.96 indicates that the type I error rate, or alpha (α), is 5% split across the two tails of the distribution curve. Therefore, the critical z value is zα/2 = ±1.96, and z = 1.96 covers 95% of the area of the standard normal distribution. Using the z value of 1.96, all the path coefficients in the model are significant, with a 5% probability of making a type I error (rejecting a true null hypothesis).
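For the first path, the test already described can be summarized compactly using the values quoted from the output above:

$$H_0: \beta_{sq} = 0, \quad H_1: \beta_{sq} \neq 0, \qquad t = \frac{0.15}{0.040} \approx 3.75 > 1.96 \;\Rightarrow\; \text{reject } H_0 \text{ at } \alpha = .05$$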
Coefficient of Multiple Determination (R²)
This is a measure of fit for the regression model. R² is "the proportion of variation of the dependent variable (y), accounted for by the independent variables in the regression models" (Black, 2008). The value of R² ranges from 0 to 1.

$$R^2 = \frac{\text{regression sum of squares (SSR)}}{\text{total sum of squares (SST)}}$$

where total sum of squares (SST) = explained sum of squares (SSR) + unexplained sum of squares (SSE).
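Applied to the first structural equation reported above, R² = 0.22 means that sq and iq together account for about 22% of the variation in system use; equivalently,

$$R^2 = \frac{SSR}{SST} = 1 - \frac{SSE}{SST} = 0.22$$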
Path Diagram
LISREL 8.70 produces a path diagram as shown in Figure 14. The path diagram provides us with important information. The double-headed arrow in Figure 14 shows that the covariance between system quality (sq) and information quality (iq) is 1.02. The variances of sq and iq are 1.42 and 1.5, respectively. The same information can also be obtained from the covariance matrix of independent variables (Figure 13). The single-headed arrows carry the path coefficients from the structural equations in Figure 12. For example, the first structural equation defines the relationship between su and the two independent variables:

su = 0.15*sq + 0.27*iq, Errorvar. = 0.79
Figure 14. Path diagram
These path coefficients are displayed in Figure 14. The right-hand side of the path diagram also includes the error terms (error variances) for each endogenous variable (.79, .51, and 3.00). The bottom of the path diagram shows four fit indices. The p-value is a way of testing hypotheses; it is also known as the observed significance level. The p-value is the smallest value of α for which the null hypothesis can be rejected.
Reduced Form Equations
The reduced form equation of a path model is the result of algebraically rearranging the independent and dependent variables so that each dependent (endogenous) variable is on the left side of one equation and only independent (exogenous) variables are on the right side. Reduced form equations express the endogenous variables (left-hand side) in terms of the total causal effects of the two exogenous variables. Therefore, the effects of the mediating variables (system use and user satisfaction) on the effect variables are not shown. The first effect variable (system use) is explained by the two cause variables (system quality and information quality). With the absence of any
mediating variables, the equation for system use (su) appears under both the structural equations and the reduced form equations with identical path coefficients.
Effect Decomposition
The second effect variable (user satisfaction) is explained only by the total effects of the two cause variables, excluding the direct effect of the mediating variable (system use). The total effects are computed by combining the direct and indirect effects of the two cause variables.
• The direct effect of system quality on user satisfaction is .39 (taken from the second structural equation).
• The indirect effect of system quality on user satisfaction is computed by multiplying the path coefficients along the compound path, in this case system quality → system use → user satisfaction. Therefore the indirect effect is .15 × .13 = .0195.
• The total causal effect of system quality on user satisfaction is .39 + .0195 = .4095. In Figure 12, it is shown as .41.
The total effects on e-learning outcomes consist of several indirect paths from the two cause variables. The indirect effects from information quality are decomposed as follows:
• Information quality → system use → e-learning outcome: .27 × .2 = .054
• Information quality → user satisfaction → e-learning outcome: .44 × .54 = .2376
• Information quality → system use → user satisfaction → e-learning outcome: .27 × .13 × .54 = .018954
• Total indirect effects from information quality: .054 + .2376 + .018954 = .310554
The indirect effects from system quality are decomposed as follows:
• System quality → system use → e-learning outcome: .15 × .2 = .03
• System quality → user satisfaction → e-learning outcome: .39 × .54 = .2106
• System quality → system use → user satisfaction → e-learning outcome: .15 × .13 × .54 = .01053
• Total indirect effects from system quality: .03 + .2106 + .01053 = .25113
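More generally (a standard SEM result rather than anything specific to this chapter's output), the same decomposition can be written in matrix form. With Γ collecting the paths from exogenous to endogenous variables and B the paths among endogenous variables, the total effects of the exogenous variables are

$$\text{Total effects of } x \text{ on } y = (I - B)^{-1}\Gamma$$

and the indirect effects are the total effects minus the direct effects, $(I - B)^{-1}\Gamma - \Gamma$. The reduced form coefficients reported by LISREL are exactly these total effects, which is why the compound-path arithmetic above reproduces the reduced form equations in Figure 12.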
SUMMARY
A family of path modeling techniques consists of regression analysis, path analysis, factor analysis, and structural equation modeling. This chapter focuses on path analysis modeling using LISREL 8.70. SEM methodology is built on the three related multivariate statistical techniques (regression, path, and confirmatory factor models). LISREL is capable of modeling all four techniques through the steps of specification, identification, parameter estimation, testing, and modification. A simple path analysis model is introduced to investigate the relationships among information quality, system quality, system use, user satisfaction, and e-learning outcomes.
This chapter demonstrates the various aspects of path modeling to help the readers understand basic concepts, modeling procedures, and interpretation of LISREL outputs.
REFERENCES Arbaugh, J. B., Godfrey, M. R., Johnson, M., Pollack, B. L., Niendorf, B., & Wresch, W. (2009). Research in online and blended learning in the business disciplines: Key findings and possible future directions. The Internet and Higher Education, 12, 71–87. doi:10.1016/j.iheduc.2009.06.006 Arbaugh, J. B., Hwang, A., & Pollack, B. L. (2010). A review of research methods in online and blended business education: 2000-2009. In Eom, S. B., & Arbaugh, J. B. (Eds.), Student satisfaction and learning outcomes in e-learning: An introduction to empirical research. Hershey, PA: IGI Global. Bentler, P. M. (1990). Comparative fit indices in structural models. Psychological Bulletin, 107, 238–246. doi:10.1037/0033-2909.107.2.238 Bentler, P. M., & Bonnet, D. C. (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin, 88(3), 588–606. doi:10.1037/0033-2909.88.3.588 Black, K. (2008). Business statistics for contemporary decision making (5th ed.). Hoboken, NJ: Wiley. Bollen, K. A. (1989). A new incremental fit index for general structural models. Sociological Methods & Research, 17, 303–316. doi:10.1177/0049124189017003004 Boomsma, A. (2000). Reporting analyses of covariance structures. Structural Equation Modeling, 7(3), 461–483. doi:10.1207/ S15328007SEM0703_6
Byrne, B. M. (1998). Structural equation modeling with LISREL, PRELIS, and SIMPLIS: Basic concepts, applications, and programming. Mahwah, NJ: Lawrence Erlbaum. Byrne, B. M. (2010). Structural equation modeling with AMOS: Basic concepts, applications, and programming (2nd ed.). New York, NY: Routledge Academic. Crowley, S. L., & Fan, X. (1997). Structural equation modeling: Basic concepts and applications in personality assessment research. Journal of Personality Assessment, 68(3), 508–531. doi:10.1207/ s15327752jpa6803_4 Diamantopoulos, A., & Siguaw, J. A. (2000). Introducing LISREL. London, UK: Sage Publications. Dielman, T. E. (1996). Applied regression analysis for business and economics (2nd ed.). Belmont, CA: Wadsworth Publishing Company. Eom, S. B. (2004). Personal communication with Richard Lomax in regard to the use of T value in structural equation modeling through e-mail. Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis (7th ed.). Upper Saddle River, New Jersey: Prentice Hall. Hayduk, L., Cummings, G. G., Boadu, K., Pazderka-Robinson, H., & Boulianne, S. (2007). Testing! Testing! One, two three – testing the theory in structural equation models! Personality and Individual Differences, 42(2), 841–850. doi:10.1016/j.paid.2006.10.001 Hooper, D., Coughlan, J., & Mullen, M. R. (2008). Structural equation modeling: Guidelines for determining model fit. The Electronic Journal of Business Research Methods, 6(1), 53–60. Hoyle, R. H. (1995). The structural equation modeling approach: Basic concepts and fundamental issues. In Hoyle, R. H. (Ed.), Structural equation modeling: Concepts, issues, and applications (pp. 1–15). Thousand Oaks, CA: Sage Publications.
Hoyle, R. H., & Panter, A. T. (1995). Writing about structural equation models. In Hoyle, R. H. (Ed.), Structural equation modeling: Concepts, issues, and applications (pp. 158–176). Thousand Oaks, CA: Sage Publications. Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55. doi:10.1080/10705519909540118 Jöreskog, K. G., & Sörbom, D. (1989). LISREL 7 user’s reference guide. Chicago, IL: SPSS Publications. Jöreskog, K. G., & Sörbom, D. (1993). LISREL 8: Structural equation modeling with the SIMPLIS command language. Hillsdale, NJ: Lawrence Erlbaum Associates Publishers. Kline, R. B. (1998). Principles and practice of structural equation modeling. New York, NY: Guilford Press. Kline, R. B. (2005). Principles and practice of structural equation modeling (2nd ed.). New York, NY: The Guilford Press. MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1, 130–149. doi:10.1037/1082-989X.1.2.130 McDonald, R. P., & Ho, M.-H. R. (2002). Principles and practice in reporting statistical equation analyses. Psychological Methods, 7(1), 64–82. doi:10.1037/1082-989X.7.1.64 Piccoli, G., Ahmad, R., & Ives, B. (2001). Webbased virtual learning environments: A research framework and a preliminary assessment of effectiveness in basic IT skills training. Management Information Systems Quarterly, 25(4), 401–426. doi:10.2307/3250989
Schumacker, R. E., & Lomax, R. G. (2004). A beginner’s guide to structural equation modeling (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates, Publishers. Schumacker, R. E., & Lomax, R. G. (2010). A beginner’s guide to structural equation modeling (3rd ed.). New York, NY: Routledge, Taylor and Francis Group. Tanaka, J. S. (1993). Multifaceted conceptions of fit in structural equation models. In Bollen, K. A., & Long, J. S. (Eds.), Testing structural equation models (pp. 10–39). Newbury Park, CA: Sage Publications. Tanaka, J. S., & Huba, G. J. (1984). Confirmatory hierarchical factor analyses of psychological distress measures. Journal of Personality and Social Psychology, 46, 621–635. doi:10.1037/00223514.46.3.621
ADDITIONAL READING Byrne, B. M. (1994). Structural Equation Modeling with Eqs and Eqs/Windows. Thousand Oaks: Sage Publications. Byrne, B. M. (2010). Structural Equation Modeling with AMOS: Basic Concepts, Applications, and Programming (2nd ed.). New York: Routledge Academic. Carmeli, A., & Gefen, D. (2005). The Relationship between Work Commitment Models and Employee Withdrawal Intentions. Journal of Managerial Psychology, 20(1/2), 63–86. doi:10.1108/02683940510579731 Christensen, T. E., Fogarty, T. J., & Wallace, W. A. (2002). The Association between the Directional Accuracy of Self-Efficacy and Accounting Course Performance. Issues in Accounting Education., 17(1), 1–26. doi:10.2308/iace.2002.17.1.1
DeShields, O. W. Jr, Kara, A., & Kaynak, E. (2005). Determinants of Business Student Satisfaction and Retention in Higher Education: Applying Herzberg’s Two-Factor Theory. International Journal of Educational Management, 19(2/3), 128–139. Dow, K. E., Wong, J., Jackson, C., & Leitch, R. A. (2008). A Comparison of Structural Equation Modeling Approaches: The Case of User Acceptance of Information Systems. Journal of Computer Information Systems, 48(4), 106–115. Harvey, P., Harris, K. J., & Martinko, M. J. (2008). The Mediated Influence of Hostile Attributional Style on Turnover Intentions. Journal of Business and Psychology, 22(4), 333–343. doi:10.1007/ s10869-008-9073-1 Loehlin, J. C. (2004). Latent Variable Models: An Introduction to Factor, Path, and Structural Equation Analysis (4th ed.). Mahwah, N.J.: L. Erlbaum Associates. Maruyama, G. M. (1998). Basics of Structural Equation Modeling. Thousand Oaks: Sage Publications. Montgomery, A. L., Li, S., Srinivasan, K., & Liechty, J. C. (2004). Modeling Online Browsing and Path Analysis Using Clickstream Data. Marketing Science, 23(4), 579–595. doi:10.1287/ mksc.1040.0073 Olobatuyi, M. E. (2006). A User’s Guide to Path Analysis. Lanham, MD: University Press of America. Plice, R. K., & Reinig, B. A. (2007). Aligning the Information Systems Curriculum with the Needs of Industry and Graduates. Journal of Computer Information Systems, 48(1), 22–30. Poon, J. M. L. (2003). Situational Antecedents and Outcomes of Organizational Politics Perceptions. Journal of Managerial Psychology, 18(1/2), 138–155. doi:10.1108/02683940310465036
Surhone, L. M., Timpledon, M. T., & Marseken, S. F. (Eds.). (2010). Path Analysis (Statistics): Structural Equation Model, Analysis of Covariance, Latent Variable Model. Betascript Publishing. Xenikou, A., & Simosi, M. (2006). Organizational Culture and Transformational Leadership as Predictors of Business Unit Performance. Journal of Managerial Psychology, 21(6), 566–579. doi:10.1108/02683940610684409 Yousef, D. A. (2002). Job Satisfaction as a Mediator of the Relationship between Role Stressors and Organizational Commitment: A Study from an Arabic Cultural Perspective. Journal of Managerial Psychology, 17(4), 250–266. doi:10.1108/02683940210428074
KEY TERMS AND DEFINITIONS
Coefficient of Correlation: A measure of the linear correlation of two variables. The linear correlation measures the strength of the relationship between two numerical variables. For quantitative data, the Pearson product-moment correlation coefficient, r, is used to measure the strength of the linear correlation. Quantitative data are often referred to as metric data, which include both ratio-level data and interval-level data. The value of r ranges from -1 to 0 to +1, representing a perfect negative correlation (r = -1), no correlation (r = 0), and a perfect positive correlation (r = +1).
Covariance: A measure of joint variability for two variables. A positive (negative) covariance value indicates an increasing (decreasing) linear relationship between two quantitative variables. It can be denoted by Cov(x,y) = SSxy/(n-1) = Σ(xi - x̄)(yi - ȳ)/(n-1), where n is the sample size. It is a necessary input for computing the coefficient of correlation between two quantitative variables: the value of r is the covariance of the two variables (x, y) divided by the product of the standard deviation of variable x and the standard deviation of variable y. Understanding the concepts of variance and covariance is very important to better understand path analysis modeling, since a variance-covariance matrix is used to test the fit of the correlation matrix against the causal research models.
variable x and the standard deviation of variable y. Understanding the concept of variance and covariance is very important to better understand path analysis modeling, since a variance and covariance matrix is used to test the fit of the correlation matrix against the causal research models. Exogenous Variables: Comes from the Greek words “exo” and “gen”, which mean “outside” and “production”, referring to uncontrollable variables coming from outside of the model. Therefore values are given. Endogenous/dependent variables are the opposite of exogenous, whose values are dependent on exogenous/independent variables. User satisfaction may be dependent on system quality and other exogenous variables. A beginner of structural equation modeling may encounter so many different terms referring to dependent and independent variables. The independent variables are denoted as “x”. They are also known as exogenous, explanatory, predictor, or cause variables. The dependent variables are the variables to be predicted and denoted as “y”. They are also known as endogenous, response, or effect variables. Model Estimation: Estimates the parameters of theoretical models using different estimation procedures including maximum likelihood (ML), generalized least squares (GLS), unweighted least squares (ULS), etc. LISREL uses the ML procedures to generate structural equations and reduced form equations, covariance matrix of independent Variables, covariance matrix of latent variables, and goodness of fit statistics. Model Identification: Decides whether the model is identifiable. The identifiability of a path model can be determined by comparing the number of parameters to be estimated (unknowns) and the number of distinct values in the covariance matrix (knowns). If the number of known is equal to the number of unknown in the path model, it is called just identified. If the number of unknown is less than the number of known, it is the case of over-identified model. The last case, under identified model, occurs when the number of unknown is more than the number of known.
The term determination is often used interchangeably with the term identification. Therefore, the terms just-determination, under-determination, and over-determination are used with or without hyphens. It is critical for path analysis modelers to avoid building under-identified models. Model Specification: The first step of path analysis modeling. It is concerned with identifying all relevant variables in the research model and specifying the proper relationships among them. The identification of all relevant variables specifically refers to including only necessary variables and excluding extraneous variables. Needless to say, either including extraneous variables or excluding essential variables will lead to a path model that may not truly reflect the causal relationships among variables. Moreover, it can affect the path coefficients significantly, and therefore the model results can be unreliable. It is also important in this path modeling step to build a parsimonious model with a few substantive meaningful paths. Model Testing: Tests the fit of the correlation matrix against the theoretical causal model built by researchers based on the extant literature. The output file contains a wide range of useful information, including multiple fit indices available to test the model. The goodness-of-fit statistics include an extensive array of fit indices. Choosing specific indices for the assessment of model fit is a complex task because each index is designed to best suit specific path/structural equation models depending on the sample size, estimation procedure, model complexity, violation of the underlying assumptions of multivariate normality, variable independence, etc.
provides us with two different path coefficients: structural equations and reduced form equations. The structural equations consist of all the equations including mediating variables. The reduced form equations show only the effects of exogenous (independent) variables on endogenous variables. The term, structural equations, refers to the simultaneous equations in a model. It is also known as multiequations. It refers to the equation that contains mediating variables. The mediating variable functions as the dependent (response) variable in one equation. At the same time, it functions as the independent variable (predictor) in another equation. Structural equations outputs show direct and indirect effects of all exogenous variables and mediating variables. Structural equations also illustrate the effects of endogenous variables on each other. The reduced form equation of a path model is the result of rearranging algebraically independent variables and dependent variables so that each dependent (endogenous) variable is on the left side of one equation, and only independent (exogenous) variables are on the right side. Reduced form equations provide us with the equations of the endogenous variables (left-hand side) in terms of the total causal effect of the two exogenous variables. Therefore, the effects of mediating variables on the effect variables are not shown. t- values: Refer to the numbers below the standard errors of the estimate. It is the ratio that can be obtained by dividing the path coefficients by the standard errors of estimates. The high t-value indicates that the path coefficient is nonzero (significant). In LISREL, the rule is to use a critical value of T. This T must not be confused with the student t distribution. In SEM, the t value that is typically used is t > 1.96. Traditionally, these critical values are called t values, but they use z critical values.
Chapter 5
Testing the DeLone-McLean Model of Information System Success in an E-Learning Context
Sean B. Eom, Southeast Missouri State University, USA
James Stapleton, Southeast Missouri State University, USA
ABSTRACT This chapter has two important objectives (a) introduction of structural equation modeling for a beginner; and (b) empirical testing of the validity of the information system (IS) success model of DeLone and McLean (the DM model) in an e-learning environment, using LISREL based structural equation modeling. The following section briefly describes the prior literature on course delivery technologies and e-learning success. The next section presents the research model tested and discussion of the survey instrument. The structural equation modeling process is fully discussed including specification, identification, estimation, testing, and modification of the model. The final section summarizes the test results. To build e-learning theories, those untested conceptual frameworks must be tested and refined. Nevertheless, there has been very little testing of these frameworks. This chapter is concerned with the testing of one such framework. There is abundant prior research that examines the relationships among information quality, system quality, system use, user satisfaction, and system outcomes. This is the first study that focuses on the testing of the DM model in an e-learning context. DOI: 10.4018/978-1-60960-615-2.ch005
INTRODUCTION During the past decades, we have seen an increase in empirical research to identify the success factors of e-learning systems. The majority of e-learning research has focused on the two research streams (a) outcome comparison studies with classroombased learning; and (b) studies examining potential predictors of e-learning success or e-learning outcomes (Arbaugh et al., 2009). The quality of e-learning empirical research has also improved substantially during the past two decades. Many frameworks for e-learning education in business have been developed or adopted from other disciplines. To build e-learning theories, those untested conceptual frameworks must be tested and refined. Nevertheless, there has been very little testing of these frameworks. This chapter is concerned with the testing of one such framework: the information system (IS) success model of DeLone and McLean (the DM model). This chapter has two important objectives (a) introduction of structural equation modeling for a beginner; and (b) empirically testing the validity of the DM model in an e-learning environment. The primary focus of this book is placed on providing an introduction to e-learning empirical research. To that end, several chapters in the book include tutorials on structural equation modeling techniques such as path analysis and structural equation modeling using partial least squares. This chapter complements those chapters to provide a basic introduction to structural equation modeling. The second objective of this chapter is to empirically test the validity of the DM model in an e-learning environment. The DM model is one of the widely recognized IS models based on a systematic review of 180 studies with over 100 measures. The DM model has been empirically tested using structural equation modeling in a quasi-voluntary IS use context (Rai, Lang, & Welker, 2002) and in a mandatory information system context (Livari, 2005). The study of Rai et al. concluded that the DM model
has explanatory power, and therefore, the model has merit for explaining IS success. The study of Livari concluded that perceived system quality and information quality are significant predictors of user satisfaction. However, his study failed to support the positive association between system use and user satisfaction. Our study is the first empirical testing of the DM model in e-learning context. E-learning systems and information systems share some common dependent variables. Nevertheless, the two systems differ in terms of the output of the systems (independent variables). The rest of this chapter is organized into several sections. The following section briefly describes the prior literature on course delivery technologies and e-learning success. The course delivery technologies are part of a comprehensive array of dependent variables that affect the success of e-learning systems. The next section presents the research model to be tested. The survey instrument is discussed in the next section. Structural equation modeling (SEM) methodology is fully discussed in the following sections, including model specification, model identification, model estimation, and model testing and modification. The final section summarizes the test results.
COURSE DELIVERY TECHNOLOGIES AND E-LEARNING SUCCESS The review of the past two decades of e-learning systems research identified three dimensions: human (students, instructors, and interaction among them), design (course contents, course structures, etc.), and e-learning systems including technologies. In each dimension, researchers identified many indicator variables. For example, students can be further sub-classified into sub-dimensions such as learning styles, intelligence, self-efficacy, motivation, self-regulated learning behaviors, etc. For the review of the impact of human dimensions and design dimensions on e-learning success,
Figure 1. Information systems success model. Source: DeLone and McLean 1992, P.87
readers are referred to (Arbaugh et al., 2009). The technological dimension of e-learning success factors includes many information systems tools such as Web 2.0 technologies, push technologies, blogs, and wikis, to name a few. Readers are referred to (Arbaugh et al., 2009) to review the various empirical studies to examine the impact of these tools on e-learning success.
The DeLone and McLean Model The research model we tested is the model of information systems (IS) success created by DeLone and McLean (1992). Based on the review of 180 empirical studies, DeLone and McLean presented a more integrated view of the concept of information systems success and formulated a more comprehensive model of information systems success (Figure 1). This was based on six major categories of success measures, which are interrelated and interdependent. They are system quality, information quality, use, user satisfaction, individual impact, and organizational impact. Seddon presented and justified a re-specified and slightly extended version of the DeLone and McLean model because the inclusion of both variance and process interpretations in their model could be confusing. Seddon demonstrated that the term, “IS use” in the DM model can be interpreted in three different ways—as a variable that proxies for the benefits from use, as the dependent variable,
and as an event in a process leading to individual or organizational impact. The extended model clarified the meaning of IS use and introduced four new variables (expectations, consequences, perceived usefulness, and net benefits to society) and reassembled the links among the variables.
The Updated Model of Information Systems Success
The original IS success model of DeLone and McLean (1992) was updated by adding one more construct, service quality, and by combining individual and organizational impacts into a single construct of net benefits (DeLone & McLean, 2003). The addition of the service quality construct is justified by the changing role of the information systems organization, which has the dual role of information provider and service provider. The role of information provider is to produce information for the end-users (the entire organization and management). The role of service provider is to offer support for end-user developers. To correctly understand the significance of this construct in the IS success model, we need to understand the dual role of the end-users as both users of the system and developers of the system. The sample SERVQUAL instrument items used as indicators of the service quality construct include the following dimensions:
Figure 2. The e-learning success model and sample metrics of Holsapple and Lee-Post. Source: Holsapple and Lee-Post, 2006. p.71
• Tangibility – The information systems department has up-to-date hardware and software.
• Reliability – The information systems department should be able to perform the service dependably and accurately.
• Responsiveness – Information systems department staff should be able to give prompt service to users.
• Assurance – Information systems department staff should be able to do their jobs well, inspiring confidence and trust.
• Empathy – Information systems department staff should have users' best interests at heart, giving individualized service with a caring attitude.
SERVQUAL is a multi-item scale developed to assess customer perceptions of service quality in service and retail businesses. Kettinger and Lee (1994) were among the early adapters of SERVQUAL to the IS context. The updated model
of IS success is later adapted to the e-learning context (Holsapple & Lee-Post, 2006).
The E-Learning Success Model of Holsapple and Lee-Post As introduced in the review, a large volume of empirical studies have been conducted to investigate the success factors of e-learning systems. Holsapple and Lee-Post attempted to formulate a holistic and comprehensive model for assessing e-learning initiatives. They developed a model to guide the design, development, and delivery of successful e-learning initiatives. The E-Learning Success Model of Holsapple and Lee-Post (Figure 2) uses the process approach to measuring and assessing success. The model includes success metrics developed specifically for the e-learning context. In this process approach, the overall success of e-learning initiatives depends on the
attainment of success at each of the three stages of e-learning systems development: design, delivery, and outcome analysis. Their model shows that system design, system delivery, and systems outcomes are interdependent as shown by the unidirectional, single headed arrow and bi-directional, double headed arrows. Using the action research approach, they tested the model through the development and implementation of an online version of an undergraduate quantitative methods course. Holsapple and Lee-Post further suggested validating their success model using empirical studies.
SURVEY INSTRUMENT Wang, Wang, and Shee (2007) first explored whether traditional IS success models can be extended to investigate e-learning systems’ success in organizational contexts. Based on DeLone and McLean’s (2003) updated IS success model, Wang, et al. developed and validated a multi-dimensional model for assessing e-learning systems’ success (ELSS) from the perspective of the employee (elearner). The study conceptualized the construct of e-learning systems’ success, provided an empirical validation of the construct and its underlying dimensionality, and developed a standardized instrument with desirable psychometric properties for measuring e-learning systems success. Using the proposed ELSS instrument, the current study attempts to extend the use of DeLone and McLean’s (2003) model to university-based e-learning environments. The survey instrument consisted of 35 items using a seven point Likert scale ranging from “strongly disagree” to “strongly agree.” In addition, respondents were asked six demographic-type questions. The population was undergraduate and graduate students that were enrolled in an online course at a large university located in the Midwest United States. Invitations to reply to the survey were administered online at the time of log-in to 2,156
unique students. Of those students invited, 809 students volunteered responses with 674 surveys being complete and usable for a response rate of 31.3%.
SAMPLE SIZE IN STRUCTURAL EQUATION MODELING
Sample size is an important consideration to ensure that the purpose of the study can be fully accomplished. The number of parameters is an important input to decide the size of a sample that can assess the significance of testing model effects. Kline (1998) suggested the following:
• Ideal sample size = the number of parameters × 20 or more
• Adequate sample size = the number of parameters × 10 or more
• Insufficient sample size = the number of parameters × 5 or less
The type of parameters in path analysis and structural equation models includes the following:
• Path coefficients
• Equation error variances
• Correlation among the independent variables
• Independent variable variances
The number of each type of parameter can be counted from the specified path model. The model identification section shows an example of how to count the number of the different types of parameter. According to Schumacker and Lomax (2010, p.211), “In traditional multivariate statistics, the rule of thumb is 20 subjects per variable (20:1). The rule of thumb used in structural equation modeling vary from 100, 200, to 500 or more subjects per study, depending on model complexity and cross-validation requirements.”
Hair et al. (2010) suggested the following:
• Minimum sample size of 100 for models with five or fewer constructs.
• Minimum sample size of 150-300 for models with seven or fewer constructs.
• Minimum sample size of 500 for models with a large number of constructs.
By any measure, our sample size of 674 completed responses with no missing data is considered to be more than adequate.
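As a rough cross-check against the 20:1 rule of thumb quoted above (the count of 17 observed variables is taken from the model identification discussion below), the traditional multivariate benchmark would be

$$17 \times 20 = 340 < 674$$

so the sample comfortably exceeds that guideline as well.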
SEM METHODOLOGY
All structural equation models (regression, path, confirmatory factor, and structural equation models) follow a series of steps: specification, identification, estimation, testing, and modification of the model (Schumacker & Lomax, 2010).
Model Specification Model specification is the first step of identifying all of the relevant variables in the model and examining the proper relationships among the identified variables. Typically, before this step of model specification, a complete survey of the literature on the specific research area must be conducted to identify major issues for further investigation. The identification of all relevant variables specifically refers to including only necessary variables and excluding extraneous variables. Figure 3 shows the research model with common path diagram symbols. Figure 4 is the LISREL produced Basic Conceptual Diagram of the DeLone-McLean Model of Information System Success. An observed variable is enclosed by either a rectangle or a square. A measurement error in an observed variable is denoted by an oval shape with an attached unidirectional arrow pointing to that observed variable. Single head
arrows indicate unidirectional (recursive) paths. A reciprocal (non-recursive) path between variables can be drawn by using two arrows with different directions. Double headed or curved arrows denote correlation between variables. Each observed variable is contacted by two pointing arrowheads. For example, the first observed variable (q1), measured by questionnaire item 1, has two arrows at left and right. The left-hand side arrow indicates a measurement error, and the right hand side arrow indicates that q1 is an indicator that is associated with the system quality construct. The models in Figures 3 and 4 examine the relationships among five constructs. The two independent constructs are system quality and information quality. The two mediating constructs are system use and user satisfaction. The dependent construct is e-learning outcomes.
System Quality and Information Quality

The IS success model (DeLone & McLean, 1992; DeLone & McLean, 2003) and the e-learning success model (Holsapple & Lee-Post, 2006) posit that the success of IS and e-learning systems depends on the intervening variables (user satisfaction and system use), which in turn depend on the quality of information, system, and service. The technology acceptance model (TAM), developed in the IS field, has emerged as a useful model for explaining e-learning system usage and satisfaction (Landry, Griffeth, & Hartman, 2006). The TAM defines the relationship between system use (the dependent construct) and perceived usefulness and perceived ease of use (two independent constructs); that is, the TAM theorizes that system use is determined by perceived usefulness and perceived ease of use. The TAM has been extended by many other researchers; the unified theory of acceptance and use of technology (UTAUT) is one such extension. The TAM postulates that perceived usefulness and ease of use determine an individual’s intention to use a system, which in turn determines actual system use.
Figure 3. Research model
Figure 4. LISREL produced basic conceptual diagram of the DeLone-McLean model of information system success
The UTAUT posits that four key constructs (performance expectancy, effort expectancy, social influence, and facilitating conditions) directly determine usage intention and behavior (Venkatesh, Morris, Davis, & Davis, 2003). Moreover, gender, age, experience, and voluntariness of use are posited to moderate the impact of the four key constructs on usage intention and behavior (Venkatesh & Davis, 2000; Venkatesh & Morris, 2000; Venkatesh, Morris, Davis, & Davis, 2003). Arbaugh (2005) found that perceived usefulness and ease of use of Blackboard significantly predicted student satisfaction with the Internet as an educational delivery medium.
System Use

System use has been considered a factor influencing system success for the past several decades and has been employed by a number of researchers (DeLone & McLean, 1992; DeLone & McLean, 2003; Holsapple & Lee-Post, 2006; Rai, Lang, & Welker, 2002). Consequently, we hypothesize that system use is a variable that will be positively related to e-learning system success and e-learner satisfaction.
User Satisfaction and E-Learning Outcomes

There is abundant prior research examining the relationship between user satisfaction and individual impact (Doll & Torkzadeh, 1988; Livari, 2005; Rai, Lang, & Welker, 2002). Eom et al. (2006) examined the determinants of students’ satisfaction and their perceived learning outcomes in the context of university online courses. Their study found that user satisfaction is a significant predictor of learning outcomes.
Identification

The next step, after the specification of the structural equation model, is to decide whether the model is identifiable. The identifiability of the structural equation model can be determined by comparing the number of parameters to be estimated (unknowns) with the number of distinct values in the covariance matrix (knowns). If the number of knowns is equal to the number of unknowns in the path model, the model is called “just identified.” If the number of unknowns is less than the number of knowns, the model is “over identified.” The last case, an “under identified” model, occurs when the number of unknowns is greater than the number of knowns. The term determination is often used interchangeably with identification; therefore, the terms just determination, under determination, and over determination are used with or without hyphens. It is critical to avoid building an under-identified model: if a path model is under identified, a unique solution cannot be computed. A model is just identified if the number of parameters equals the number of distinct values in the covariance matrix; the output will then indicate that the path analysis is a saturated model, with chi-square = 0 and degrees of freedom = 0. In our model, the number of distinct values in the covariance matrix is 153 (17*18/2). The number of free parameters to be estimated is as follows:

• Factor loadings – 14 (Figure 4 shows 17 observed variables and their factor loadings. Factor loadings in SEM outputs are standardized solutions. To produce standardized factor loadings that use the same scale of measurement, some parameters (q11, q15, and q17) must be fixed.)
• Measurement error variances – 17 (Figure 4 contains 17 observed variables. Each of them has a measurement error, indicated by an arrow. Unlike regression or path models, SEM explicitly models these measurement errors of observed variables.)
• Measurement error covariances – 0 (By definition, measurement errors are not allowed to be correlated.)
• Latent independent construct variances – 0
• Latent independent construct covariances – 1
• Structure coefficients – 7 (There are 7 arrows that define the relationships among the structural (latent) variables in Figure 4.)
• Equation prediction error variances – 3 (Each of the mediating latent variables (system use and user satisfaction) and the dependent variable (system outcomes) equations has a prediction error term.)
• Equation prediction error covariances – 0
The number of distinct values in the covariance matrix is equal to p(p+1)/2 = [17(17+1)]/2 = 9*17 = 153, where p is the number of observed variables. The number of free parameters is 42. Therefore, the degrees of freedom (df) of the model are computed as [p(p+1)/2] minus the number of free parameters: df = 153 – 42 = 111. This is an over-identified model because the total number of distinct values in the covariance matrix (153) is greater than the number of free parameters to be estimated (42).
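The counting rule above is easy to mechanize. The following short Python sketch (illustrative only; the parameter counts are taken from the list above) computes the number of knowns, the degrees of freedom, and the identification status.

```python
# Identification check for the specified model (counts taken from the text above).
p = 17  # number of observed variables

knowns = p * (p + 1) // 2  # distinct values in the covariance matrix -> 153

free_parameters = {
    "factor loadings": 14,
    "measurement error variances": 17,
    "measurement error covariances": 0,
    "latent independent construct variances": 0,
    "latent independent construct covariances": 1,
    "structure coefficients": 7,
    "equation prediction error variances": 3,
    "equation prediction error covariances": 0,
}
unknowns = sum(free_parameters.values())  # 42

df = knowns - unknowns  # 111
status = "over identified" if df > 0 else ("just identified" if df == 0 else "under identified")
print(knowns, unknowns, df, status)  # 153 42 111 over identified
```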
Estimation

Model identification is followed by model estimation. This step estimates the parameters of the theoretical model. Figure 4 shows the first user-interface display of LISREL 8.2, which has only three menus – file, view, and help. The LISREL command file must be created first by clicking the file menu. Figure 5 also shows dropdown menus with several choices. By clicking file, you begin to build a SIMPLIS (SIMPLe LISrel) file. LISREL 8 allows the user to use two different command languages (the LISREL command language and the SIMPLIS command language) to build an input file. The SIMPLIS command language is easy to learn and use, and it became available with LISREL version 8 (Jöreskog & Sörbom, 1993). The eight sets of hypotheses were tested using LISREL 8.70. The research model consists of two independent
constructs (system quality and information quality), two mediating constructs (systems use and user satisfaction), and one dependent construct (system outcomes). Each construct is defined by using several indicator variables. Table 2 shows latent variables and their associated indicator variables.
The SIMPLIS Command Language Syntax

The SIMPLIS command language syntax consists of the following:

• Title
• Variables
• Observed variables
• Raw data or covariance matrix or correlation matrix
• Sample size
• Latent variables
• Relationships
• End of problem
The title is optional and can span one or more lines. The variables section defines the acronyms to be used later in the observed variables section. Data for LISREL 8 can be entered as one of the following types: raw data, a covariance matrix, or a correlation matrix. Raw data can be placed as part of the input file; Figure 5 shows only the first and last parts of the data, with the rest omitted. It is recommended that the raw data be stored in an external file and referenced with the following command line:

Raw data from File filename

The next section specifies the sample size. A total of 674 valid, unduplicated responses were used to fit the path analysis model. The sample size can be specified in several formats:

• Sample size 674
• Sample size = 674
• Sample size: 674
Figure 5. The IS success model
The next section defines the relationships. It first defines the relationships between observed (indicator) variables and latent variables. For example, system quality comprises four indicator variables (q1, q2, q4, and q5):

q1 q2 q4 q5 = sysq
or
q1 – q5 = sysq
This is followed by the definition of the relationships between the latent variables, which can be written in either of two formats: Paths or Relationships. The path model in Figure 5 can be specified using either of the two formats below.
Paths
sysq -> q1 q2 q4 q5
infq -> q6 q7 q8 q9 q10
sysuse -> q11 q12
usersat -> q15 q16
sysout -> q17 q18 q19 q20
sysq -> sysuse usersat
infq -> sysuse usersat
usersat -> sysout
sysuse -> sysout usersat

Relationships
q1-q5 = sysq
q6-q10 = infq
q11 q12 = sysuse
q15 q16 = usersat
q17-q20 = sysout
sysuse = sysq infq
usersat = sysq infq sysuse
sysout = sysuse usersat

The command file is saved with an .SPL extension. LISREL has two types of user interfaces – the drop-down menus and the graphical user interface (GUI). The GUI has 12 buttons; to run the command language file, the user clicks the fifth button from the left, labeled “L”.
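For readers without access to LISREL, roughly the same model specification can be sketched in an open-source SEM package. The Python snippet below uses the semopy package with lavaan-style syntax; the package choice, the data file, and the data-frame column names are assumptions for illustration only and are not part of the chapter's analysis.

```python
# Illustrative only: the chapter uses LISREL/SIMPLIS, not semopy.
import pandas as pd
from semopy import Model

# Measurement model (items -> constructs) followed by the structural model.
model_desc = """
sysq =~ q1 + q2 + q4 + q5
infq =~ q6 + q7 + q8 + q9 + q10
sysuse =~ q11 + q12
usersat =~ q15 + q16
sysout =~ q17 + q18 + q19 + q20
sysuse ~ sysq + infq
usersat ~ sysq + infq + sysuse
sysout ~ sysuse + usersat
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical file with columns q1..q20
model = Model(model_desc)
model.fit(data)
print(model.inspect())  # parameter estimates, standard errors, and p-values
```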
Estimation and Validation

The LISREL Output File

The output file contains the following information:

• Covariance matrix
• LISREL estimates using maximum likelihood
• Structural equations
• Reduced form equations
• Covariance matrix of independent variables
• Covariance matrix of latent variables
• Goodness of fit statistics
Prior to examining the LISREL estimates, it makes sense to examine the goodness of fit statistics first. If the fit statistics do not indicate good fit, the researcher may modify the model. In this example, since we are testing a well-accepted model, it is not necessary to modify it. Comparing our goodness of fit statistics in the appendix with the suggested acceptable threshold values, most indices (RMSEA, SRMR, NFI, GFI, CFI, and NNFI) clearly meet the threshold values, while only one (relative χ2/df) falls outside the suggested range. AGFI just passes the minimum criterion.
Measurement (Outer) Model and Assessing Its Validity

Because a latent variable can only be measured indirectly through more than one observable variable, validity is an important issue in SEM. Validity is concerned with the extent to which each observed variable accurately defines its construct. Each measurement item on a survey instrument is assumed to reflect only one latent variable, and each item is related to one construct better than to any other (Gefen & Straub, 2005). This property (uni-dimensionality) of the construct must be confirmed. There are two elements of construct (factorial) validity: convergent validity and discriminant validity (see Table 1). Convergent validity is defined as the extent to which indicators of a latent variable converge or share a high proportion of variance in common (Hair, Black, Babin, & Anderson, 2010). It is established when all indicator (observed) variables load highly on their assigned factors (.5 or higher, ideally .7 or higher) and when each measurement item loads with an acceptable t-value on its latent construct (a t-value corresponding to a p-value at the .05 α level or less). Unlike PLS-Graph output, LISREL output does not generate factorial validity information such as composite reliability and AVE (average variance extracted). However, the factor loadings output (the standardized solution of the basic model) can be used to produce such information (see Figure 6 and Figure 7). The formula for AVE is ∑λi2/n, where λi is the factor loading of each observed variable on its corresponding construct and n is the number of observed variables on that construct. Composite reliability (or construct reliability), another indicator of convergent validity, is computed as (∑λi)2/[(∑λi)2 + ∑ei], where λi is the factor loading of each observed variable on its corresponding construct and ei is the error variance term for each observed variable on its corresponding construct. The error variance term is obtained as 1 – (λi)2.
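As a worked illustration of these two formulas, the short Python sketch below computes AVE and composite reliability for the system quality construct from the four standardized loadings reported in Table 1; it simply reproduces the spreadsheet calculation shown later in Table 2.

```python
# AVE and composite reliability for the system quality construct (loadings from Table 1).
loadings = [0.69, 0.85, 0.84, 0.79]  # q1, q2, q4, q5

ave = sum(l ** 2 for l in loadings) / len(loadings)                     # ≈ 0.6321
error_variances = [1 - l ** 2 for l in loadings]                        # 1 - λ² for each item
cr = sum(loadings) ** 2 / (sum(loadings) ** 2 + sum(error_variances))   # ≈ 0.8723

print(f"AVE = {ave:.4f}, CR = {cr:.4f}")
```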
Table 1. Convergent and discriminant validity and reliability of the measurement model

System Quality (CR = 0.8723; AVE = 0.6320)
• Q1 – The system is always available (0.69)
• Q2 – The system is user-friendly (0.85)
• Q4 – The system has attractive features that appeal to users (0.84)
• Q5 – The system provides high-speed information access (0.79)

Information Quality (CR = 0.9473; AVE = 0.7826)
• Q6 – The system provides info. that is exactly what you need (0.90)
• Q7 – The system provides info. that is relevant to learning (0.89)
• Q8 – The system provides sufficient information (0.92)
• Q9 – The system provides information that is easy to understand (0.89)
• Q10 – The system provides up-to-date information (0.82)

System Use (CR = 0.8461; AVE = 0.7341)
• Q11 – I frequently use the system (0.91)
• Q12 – I depend upon the system (0.80)

User Satisfaction (CR = 0.9173; AVE = 0.8473)
• Q15 – I think the system is very helpful (0.89)
• Q16 – Overall, I am satisfied with the system (0.95)

System Outcomes (CR = 0.9498; AVE = 0.8260)
• Q17 – The system has a positive impact on my learning (0.90)
• Q18 – Overall, the performance of the system is good (0.95)
• Q19 – Overall, the system is successful (0.95)
• Q20 – The system is an important and valuable aid to me in the performance of my class work (0.83)

Note: CR is composite reliability; AVE is average variance extracted. The number in parentheses after each measurement item is its factor loading.
Discriminant validity is established when variables do not cross-load on two or more constructs; in other words, each observed variable loads highly on its theoretically assigned construct and not highly on other constructs. Discriminant validity was assessed using two methods. First, the item loadings and cross-loadings of the constructs and measures were examined. If items load together and load highly (loading > 0.50) on their associated factors, they demonstrate discriminant validity. Individual reflective measures are considered reliable if they correlate more than 0.7 with the construct they are intended to measure.
Table 2. Computing AVE and composite reliability for the system quality construct

AVE:

| Variable | Factor loading (FL) | Squared (FL) |
| q1 | 0.69 | 0.4761 |
| q2 | 0.85 | 0.7225 |
| q4 | 0.84 | 0.7056 |
| q5 | 0.79 | 0.6241 |
| Total variance | | 2.5283 |
| AVE | | 0.632075 |

Composite reliability (CR):

| Variable | Factor loading (FL) | Squared (FL) | 1 – Squared (FL) |
| q1 | 0.69 | 0.4761 | 0.5239 |
| q2 | 0.85 | 0.7225 | 0.2775 |
| q4 | 0.84 | 0.7056 | 0.2944 |
| q5 | 0.79 | 0.6241 | 0.3759 |
| Sum | 3.17 | | 1.4717 |
| Squared sum | 10.0489 | | |

CR = 10.0489 / (10.0489 + 1.4717) = 0.872254917
The second method of establishing discriminant validity is comparing the square root of the average variance extracted (AVE) for each construct with the correlations among constructs. If the square root of each AVE is much larger than any correlation among any pair of latent variables, and is greater than .50 (Chin, 1998; Fornell & Larcker, 1981; Gefen & Straub, 2005), then the validity of the measurement model is established. The formula for AVE is ∑λi2/n, where λi is the factor loading of each observed variable on its corresponding construct and n is the number of observed variables on that construct. The AVE test checks that the correlation of the construct with its measurement items is larger than its correlation with other constructs (Gefen & Straub, 2005). Table 2 shows that the square root of each AVE is much larger than any correlation among any pair of latent variables and is much greater than the minimum threshold value of .50. Overall, the measurement model results provided
strong support for the factorial, convergent, and discriminant validities and reliability of the measures used in the study.
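The Fornell-Larcker comparison described above is easy to automate. The sketch below compares the square root of each construct's AVE, taken from Table 1, against that construct's correlations with the other constructs. The latent-variable correlation values used here are hypothetical placeholders for illustration only; they are not figures from the chapter.

```python
import numpy as np

# AVE values from Table 1
ave = {"sysq": 0.6320, "infq": 0.7826, "sysuse": 0.7341, "usersat": 0.8473, "sysout": 0.8260}
constructs = list(ave)

# Hypothetical latent-variable correlation matrix (placeholder values, illustration only).
corr = np.array([
    [1.00, 0.62, 0.45, 0.66, 0.60],
    [0.62, 1.00, 0.48, 0.70, 0.65],
    [0.45, 0.48, 1.00, 0.40, 0.42],
    [0.66, 0.70, 0.40, 1.00, 0.78],
    [0.60, 0.65, 0.42, 0.78, 1.00],
])

for i, name in enumerate(constructs):
    # Largest absolute correlation of this construct with any other construct.
    max_corr = max(abs(corr[i, j]) for j in range(len(constructs)) if j != i)
    passes = np.sqrt(ave[name]) > max_corr
    print(f"{name}: sqrt(AVE) = {np.sqrt(ave[name]):.3f}, largest correlation = {max_corr:.2f}, passes: {passes}")
```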
Structural (Inner) Model Results

LISREL and other covariance structure analysis approaches involve parameter estimation procedures. The goodness-of-fit statistics discussed earlier are global fit measures: they provide an overall measure of fit between the sample covariance matrix and the reproduced, model-implied covariance matrix. The global fit measures do not provide any information about the statistical significance of the individual parameter estimates for the paths in the model; those estimates are the second criterion used to evaluate model fit.
Structural Equations

The term structural equations refers to the simultaneous equations in a model (also known as multiequations). It refers to the equations that contain mediating variables.
Figure 6. Standardized solution of basic model
Figure 7. The measurement items and t-values on its latent constructs
Figure 8. Structural equations
A mediating variable functions as the dependent (response) variable in one equation and, at the same time, as the independent (predictor) variable in another equation. The structural equations output shows the direct and indirect effects of all exogenous and mediating variables, and also illustrates the effects of endogenous variables on each other. System quality, information quality, and user satisfaction have no effect on the use of e-learning systems. User satisfaction is positively influenced by system quality, information quality, and system use. The perceived system outcomes are positively influenced by user satisfaction and system use.
Path Coefficients

To interpret the output from LISREL, path coefficients (path weights) must be understood. A path coefficient is the standardized partial regression coefficient for an independent variable. In Figure 8, the estimated path coefficients appear before the asterisk (*) preceding each variable. To demonstrate how to interpret the LISREL output, we will use the first line of the output:

sysuse = 0.170*sysq + 0.413*infq, Errorvar. = 0.672, R² = 0.328
         (0.127)       (0.125)                  (0.0616)
          1.337         3.302                   10.909
The e-learning system use (sysuse) is expected to increase by 0.17 on the average if the
perception of students regarding the quality of the e-learning system (sysq), measured by “the system is user-friendly,” increases by one unit on a 7-point Likert scale while the other variable (infq) remains fixed.
Standard Errors of the Estimate

The number enclosed in parentheses below each path coefficient is the standard error of the estimate: the standard deviation of the residuals (errors) for the regression model. Residual analysis tests whether the regression line is a good fit to the data (Black, 2008). The residuals of regression/path models are defined as the difference between the observed value y and the predicted value ŷ. The sum of all the residuals is always zero. Therefore, to measure the variability of y, the next step is to compute the sum of squares of error (SSE), divide it by the degrees of freedom of the errors for the model, and take the square root of the result to obtain the standard error of the estimate (Se).

Residual = y – ŷ
SSE = ∑ (y – ŷ)²
Se = √[SSE / (n – k – 1)] = √[∑ (y – ŷ)² / (n – k – 1)]

where n = number of observations and k = number of independent variables.
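A minimal numerical sketch of this computation is shown below; the observed and predicted values are made-up numbers used only to illustrate the formula.

```python
import numpy as np

def standard_error_of_estimate(y, y_hat, k):
    """Standard deviation of the residuals: sqrt(SSE / (n - k - 1))."""
    residuals = np.asarray(y) - np.asarray(y_hat)
    sse = np.sum(residuals ** 2)
    n = len(residuals)
    return np.sqrt(sse / (n - k - 1))

# Made-up observed and predicted values, one predictor (k = 1), for illustration only.
y     = [3.0, 4.0, 5.0, 6.0, 7.0]
y_hat = [3.2, 3.9, 5.1, 5.8, 7.1]
print(standard_error_of_estimate(y, y_hat, k=1))
```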
t-Values and Their Interpretation

The numbers below the standard errors of the estimate are the t-values. A t-value is the ratio obtained by dividing a path coefficient by its standard error of the estimate. In Figure 8, the first t-value (1.337) is the ratio between the estimate (path coefficient) and its standard error (.17/.127 = 1.338); the slight difference from the 1.337 reported in Figure 8 is due to rounding. A high t-value indicates that the path coefficient is non-zero (significant). What is the threshold value of t? In structural equation modeling, the rule is to use a critical value. AMOS uses critical ratio (C.R.) values (Byrne, 2010); LISREL uses a t-value. However, this t must not be confused with the Student t distribution. In inferential statistics, when the population standard deviation (σ) is unknown and the population is normally distributed, the population mean is estimated using the Student t distribution, developed by William S. Gosset. In SEM, including path analysis modeling, the t-value typically used is t > 1.96. However, the critical t-value depends on the nature of the test (one-tailed vs. two-tailed hypothesis testing). Traditionally these critical values are called t-values, but they are actually z critical values; this is simply a historical artifact in the field (Eom, 2004).

One-tailed vs. two-tailed hypothesis testing: Depending on the nature of the test, the interpretation of t-values differs. One-tailed tests are directional, meaning that the outcome is expected to occur in only one direction, either positive or negative; the alternative hypothesis (Ha) uses either the greater-than (>) or the less-than (<) sign. Two-tailed tests are non-directional: they test whether the parameters are zero or non-zero. For example, the hypothesis test for the regression coefficient of each independent variable determines whether it is zero or not. Therefore, the null hypothesis is ß1 = 0 and the alternative hypothesis is ß1 ≠ 0; the same hypothesis is developed and tested for all independent variables. When the alternative hypothesis contains ≠, the test is two-tailed. With the specified type I error rate (α) = .05, the rejection region lies in the two tails of the standard normal distribution (2.5% in each tail). The value of 1.96 indicates that the type I error rate, or alpha (α), is 5% in the two tails of the distribution; the critical z value is zα/2 = ±1.96, and z = 1.96 covers 95% of the area of the standard normal distribution. Using the z value of 1.96, all the path coefficients in the model are significant, with a 5% probability of making a type I error (rejecting a true null hypothesis) (see Table 3).
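As a small illustration of the ratio and the 1.96 cut-off, the sketch below recomputes the t-value for the infq → sysuse path from the estimate and standard error reported in Figure 8 and derives a two-tailed p-value from the standard normal distribution.

```python
from scipy.stats import norm

estimate, std_error = 0.413, 0.125   # infq -> sysuse path (Figure 8)
t_value = estimate / std_error       # ≈ 3.30

p_two_tailed = 2 * (1 - norm.cdf(abs(t_value)))
significant_at_05 = abs(t_value) > 1.96

print(f"t = {t_value:.3f}, two-tailed p = {p_two_tailed:.4f}, significant: {significant_at_05}")
```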
Coefficient of Multiple Determination (R²)

R² is a measure of fit for the regression model: “the proportion of variation of the dependent variable (y), accounted for by the independent variables in the regression models” (Black, 2008). The value of R² ranges from 0 to 1.

R² = regression sum of squares (SSR) / total sum of squares (SST)
Total sum of squares (SST) = explained sum of squares (SSR) + unexplained sum of squares (SSE)

As Table 4 and Figure 9 show, all hypothesized relationships among the latent variables are strongly supported except one (the effect of system quality on system use), which was supported only at p < .10.
Table 3. Significance level and corresponding t-values

| Significance level (p-value) | Symbol | 1-tail test | 2-tail test |
| 0.001 | **** | 3.090 | 3.290 |
| 0.010 | *** | 2.326 | 2.576 |
| 0.050 | ** | 1.645 | 1.960 |
| 0.100 | * | 1.282 | 1.645 |
| Not significant | ns | - | - |
Table 4. Structural (inner) model results

| Path | Path coefficient | Obs. t-value | Sig. level |
| Effects on System Use (R² = .328) | | | |
| System quality | +.17 | +1.337 | * |
| Info. quality | +.413 | +3.302 | **** |
| User satisfaction | +.29 | +2.616 | *** |
| Effects on User Satisfaction (R² = .834) | | | |
| System quality | +.53 | +7.2 | **** |
| Info. quality | +.36 | +3.98 | **** |
| System use | +.07 | +2.618 | *** |
| Effects on System Outcomes (R² = .961) | | | |
| System use | +.05 | +2.635 | *** |
| User satisfaction | +.938 | +30.66 | **** |

**** p < .001, *** p < .010, ** p < .050, * p < .100
The results show that the structural model explains 32.8 percent of the variance in the system use construct, 83.4 percent of the variance in the user satisfaction construct, and 96.1 percent of the variance in the systems outcomes construct.
Comparison of the Results with the Study of Rai et al. (2002)

One critical factor that influences the determinants of information system success is the type of information system use: IS use can be volitional, mandatory, or quasi-volitional. Rai et al.’s study sample represents quasi-volitional IT use because the users of a computerized student
information system (SIS) at a mid-western US university find it very useful to access information through the SIS to perform many university tasks, but the university does not mandate SIS use. E-learning systems, on the other hand, are mandatory systems.
Structural Model Results

Impact of system quality on user satisfaction: Both studies concluded that system quality leads to a higher level of user satisfaction, with significance at p < .001 (our study) and at p < .01 (Rai et al.’s study).
Figure 9. LISREL structural model results for the DeLone and McLean model testing
Impact of information quality on user satisfaction and system use: Both studies concluded that information quality leads to a higher level of user satisfaction and system use, with significance at p < .001 (our study) and at p < .01 (Rai et al.’s study). Impact of both system use and user satisfaction on system outcomes: Both studies concluded that both system use and user satisfaction lead to a higher level of system outcomes, with similar significance levels. Relationship between system use and user satisfaction: Our study supports that user satisfaction leads to a higher level of system use and vice versa; however, the study of Rai et al. tested only the positive effect of user satisfaction on system use (see Table 5).
Comparison of Goodness of Fit Statistics

As can be seen from Table 6, our study results are strongly supported by the goodness of fit statistics.
CONCLUSION

A primary contribution of our study is that we examined the IS success model in an e-learning context, which is a strictly involuntary use setting. There are numerous empirical tests examining partial relationships among the five constructs: information quality, system quality, system use, user satisfaction, and system outcomes. However, this is the first study to empirically test the DeLone-McLean model in an e-learning context using structural equation modeling.
Table 5. Structural model results (direct and indirect effects)

| Hypothesis | Total effect | Direct effect | Indirect effect |
| H1: E-learning system quality will lead to a higher level of system use. | 0.17* | 0.17* | |
| H2: E-learning system quality will lead to a higher level of user satisfaction. | .541**** | .528*** | .013 |
| H3: Information quality will lead to a higher level of system use. | .41**** | .41**** | |
| H4: Information quality will lead to a higher level of user satisfaction. | .39**** | .359**** | .031 |
| H5: System use will lead to a higher level of user satisfaction. | .074*** | .074*** | |
| H6: User satisfaction will lead to a higher level of system use. | .29*** | .29*** | |
| H7: System use will lead to a higher level of system outcomes. | .05*** | .05*** | |
| H8: User satisfaction will lead to higher levels of student agreement that the learning outcomes of online courses are equal to or better than in face-to-face courses. | .95**** | .95**** | |

p-values: **** < 0.001, *** < 0.010, ** < 0.050, * < 0.100
Table 6. Comparison of goodness of fit statistics

| Statistic | Rai et al.’s study | Our study | Threshold levels |
| χ² | 303.89 | 472.838 | |
| DF | 113 | 111 | |
| Relative χ²* | 2.69 | 4.26 | between 2 and 3 |
| RMSEA | 0.079 | 0.07 | < .08 |
| GFI | 0.87 | 0.92 | > .9 |
| AGFI | 0.83 | 0.90 | > .9 |

* Relative χ² is computed as χ²/degrees of freedom.
Our future research will examine the implications of the results of this study. E-learning systems and other information systems differ in many respects. One critical dimension that distinguishes e-learning systems from other information systems is system outcomes. A typical information system processes data to generate information for users, who may use it to manage individual tasks in planning, organizing, controlling, coordinating/collaborating, etc. In e-learning, the students do not simply use information as typical users of other information systems do. The students need to be motivated to complete their assignments and to initiate the self-regulated learning process. They also need to be encouraged to interact with
other students and the instructor. The instructor needs to provide on-going timely feedback to the students. All these activities must be orchestrated by e-learning systems. In this respect, the role of e-learning systems as the information producer is very limited. E-learning systems must perform multiple roles above and beyond the typical role of information generator.
REFERENCES

Arbaugh, J. B. (2005). Is there an optimal design for online MBA courses? Academy of Management Learning & Education, 4, 135–149.
Arbaugh, J. B., Godfrey, M. R., Johnson, M., Pollack, B. L., Niendorf, B., & Wresch, W. (2009). Research in online and blended learning in the business disciplines: Key findings and possible future directions. The Internet and Higher Education, 12, 71–87. doi:10.1016/j.iheduc.2009.06.006
Crowley, S. L., & Fan, X. (1997). Structural equation modeling: Basic concepts and applications in personality assessment research. Journal of Personality Assessment, 68(3), 508–531. doi:10.1207/ s15327752jpa6803_4
Bentler, P. M. (1990). Comparative fit indices in structural models. Psychological Bulletin, 107, 238–246. doi:10.1037/0033-2909.107.2.238
DeLone, W. H., & McLean, E. R. (1992). Information System success: The quest for the dependent variable. Information Systems Research, 3(1), 60–95. doi:10.1287/isre.3.1.60
Bentler, P. M., & Bonnet, D. C. (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin, 88(3), 588–606. doi:10.1037/0033-2909.88.3.588
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of Information Systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9–30.
Black, K. (2008). Business statistics for contemporary decision making (5th ed.). Hoboken, NJ: Wiley.
Diamantopoulos, A., & Siguaw, J. A. (2000). Introducing LISREL. London, UK: Sage Publications.
Bollen, K. A. (1989). A new incremental fit index for general structural models. Sociological Methods & Research, 17, 303–316. doi:10.1177/0049124189017003004 Boomsma, A. (2000). Reporting analyses of covariance structures. Structural Equation Modeling, 7(3), 461–483. doi:10.1207/ S15328007SEM0703_6 Byrne, B. M. (1998). Structural equation modeling with LISREL, PRELIS, and SIMPLIS: Basic concepts, applications, and programming. Mahwah, NJ: Lawrence Erlbaum. Byrne, B. M. (2010). Structural equation modeling with AMOS: Basic concepts, applications, and programming (2nd ed.). New York, NY: Routledge Academic. Chin, W. W. (1998). The partial least squares approach to structural equation modeling. In Marcoulides, G. A. (Ed.), Modern methods for business research (pp. 295–336). Mahwah, NJ: Lawrence Erlbaum Associates.
Doll, W. J., & Torkzadeh, G. (1988). The measurement of end user computing satisfaction. Management Information Systems Quarterly, 12(2), 259–274. doi:10.2307/248851 Eom, S. B. (2004). Personal communication with Richard Lomax in regard to the use of T value in structural equation modeling through e-mail Eom, S. B., Ashill, N., & Wen, H. J. (2006). The determinants of students’ perceived learning outcome and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4(2), 215–236. doi:10.1111/j.1540-4609.2006.00114.x Fornell, C. R., & Larcker, D. (1981). Evaluating structural equation models with unobservable variables and measurement error. JMR, Journal of Marketing Research, 18(1), 39–50. doi:10.2307/3151312 Gefen, D., & Straub, D. (2005). A practical guide to factorial validity using PLS-Graph: Tutorial and annotated example. Communications of the Association for Information Systems, 16, 91–109.
Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis (7th ed.). Upper Saddle River, NJ: Prentice Hall.
Kline, R. B. (1998). Principles and practice of structural equation modeling. New York, NY: Guilford Press.
Hayduk, L., Cummings, G. G., Boadu, K., Pazderka-Robinson, H., & Boulianne, S. (2007). Testing! Testing! One, two three – testing the theory in structural equation models! Personality and Individual Differences, 42(2), 841–850. doi:10.1016/j.paid.2006.10.001
Kline, R. B. (2005). Principles and practice of structural equation modeling (2nd ed.). New York, NY: The Guilford Press.
Holsapple, C. W., & Lee-Post, A. (2006). Defining, assessing, and promoting e-learning success: An Information Systems perspective. Decision Sciences Journal of Innovative Education, 4(1), 67–85. doi:10.1111/j.1540-4609.2006.00102.x Hooper, D., Coughlan, J., & Mullen, M. R. (2008). Structural equation modeling: Guidelines for determining model fit. The Electronic Journal of Business Research Methods, 6(1), 53–60. Hoyle, R. H., & Panter, A. T. (1995). Writing about structural equation models. In Hoyle, R. H. (Ed.), Structural equation modeling: Concepts, issues, and applications (pp. 158–176). Thousand Oaks, CA: Sage Publications. Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55. doi:10.1080/10705519909540118 Jöreskog, K. G., & Sörbom, D. (1989). LISREL 7 user’s reference guide. Chicago, IL: SPSS Publications. Jöreskog, K. G., & Sörbom, D. (1993). LISREL 8: Structural equation modeling with the SIMPLIS command language. Hillsdale, NJ: Lawrence Erlbaum Associates Publishers. Kettinger, W. J., & Lee, C. C. (1994). Perceived service quality and user satisfaction with the Information Service function. Decision Sciences, 25(5/6), 737–765. doi:10.1111/j.1540-5915.1994. tb01868.x
Landry, B. J. L., Griffeth, R., & Hartman, S. (2006). Measuring student perceptions of Blackboard using the technology acceptance model. Decision Sciences Journal of Innovative Education, 4(1), 87–99. doi:10.1111/j.1540-4609.2006.00103.x Livari, J. (2005). An empirical test of the DeLoneMcLean model of Information System success. The Data Base for Advances in Information Systems, 36(2), 8–27. MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1, 130–149. doi:10.1037/1082-989X.1.2.130 McDonald, R. P., & Ho, M.-H. R. (2002). Principles and practice in reporting statistical equation analyses. Psychological Methods, 7(1), 64–82. doi:10.1037/1082-989X.7.1.64 Rai, A., Lang, S. S., & Welker, R. B. (2002). Assessing the validity of IS success models: An empirical test and theoretical analysis. Information Systems Research, 13(1), 50–69. doi:10.1287/ isre.13.1.50.96 Schumacker, R. E., & Lomax, R. G. (2004). A beginner’s guide to structural equation modeling (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates, Publishers. Schumacker, R. E., & Lomax, R. G. (2010). A beginner’s guide to structural equation modeling (3rd ed.). New York, NY: Routledge, Taylor and Francis Group.
Tanaka, J. S. (1993). Multifaceted conceptions of fit in structural equation models. In Bollen, K. A., & Long, J. S. (Eds.), Testing structural equation models (pp. 10–39). Newbury Park, CA: Sage Publications. Tanaka, J. S., & Huba, G. J. (1984). Confirmatory hierarchical factor analyses of psychological distress measures. Journal of Personality and Social Psychology, 46, 621–635. doi:10.1037/00223514.46.3.621 Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204. doi:10.1287/ mnsc.46.2.186.11926 Venkatesh, V., & Morris, M. G. (2000). Why don’t men ever stop to ask for directions? Gender, social influence, and their role in technology acceptance and usage behavior. Management Information Systems Quarterly, 24(1), 115–139. doi:10.2307/3250981 Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of Information Technology: Toward a unified view. Management Information Systems Quarterly, 27(3), 425–478. Wang, Y.-S., Wang, H.-Y., & Shee, D. Y. (2007). Measuring e-learning systems success in an organizational context: Scale development and validation. Computers in Human Behavior, 23(4), 1792–1808. doi:10.1016/j.chb.2005.10.006
KEY TERMS AND DEFINITIONS

Information Quality: The degree to which information generated has the distinctive characteristics of accuracy and precision, currency, timeliness, reliability and dependability, understandability, format, relevance, completeness, and usefulness.
Structural Equation Modeling: A set of multivariate statistical data analysis tools including regression, path analysis, confirmatory factor analysis, and structural equation models. All of these models follow the same process of modeling: specification, identification, estimation, testing, and modification. Structural equation models are a combination of regression, path analysis, and confirmatory factor models. Solving structural equation models involves the use of concepts and statistical techniques from regression, path analysis, and confirmatory factor models.

System Quality: The degree to which the system has the distinctive characteristics of the information system itself, measured by response time, system accessibility, system reliability, system flexibility, system usefulness, ease of use, ease of learning, etc.

System Use: The degree to which the user is dependent on the information system for carrying out his or her tasks, and the frequency of information system use. System use includes both information use and information system use. Information system use is broadly measured by a wide range of attributes such as frequency, regularity, and purposes of use (cost reduction, management, production, strategic planning, etc.). It is also measured by the number of reports and queries generated, acceptance of information and reports, and the actual amount of system use time, such as time per computer session.

Systems Outcomes: The degree to which the user believes that using information systems has enhanced his or her job performance. It can be measured by user confidence, efficient decisions, time to arrive at a decision, time to complete a task, decision quality, changes in decision behavior, amount of data considered, etc.

User Satisfaction: The degree of user satisfaction with an information system itself, with software and hardware, and satisfaction arising from the differences between information needed and information received.
APPENDIX

Model Fit Criteria

Model fit can be assessed by examining both global fit measures and individual parameter estimates. The chi-square test and the root-mean-square error of approximation (RMSEA) are the best examples of global fit measures. The chi-square test measures the overall fit of the model to the data; it does not provide specific details about the statistical significance of individual parameters in the model.
Stand-Alone/Absolute Indexes

Chi-Square (χ²)

Chi-square is a measure of the overall fit of the model to the data: it measures the discrepancy between the sample covariance matrix and the fitted covariance matrix. Chi-square tests are conducted on actual covariance and variance values, not correlation coefficients. Chi-square is an indicator of badness of fit; therefore, a smaller chi-square value indicates good fit and a large chi-square value indicates poor fit. The chi-square (χ²) goodness-of-fit test formula is:

χ² = ∑ (fo − fe)² / fe
with df = k − 1 − c, where fo = covariance frequency of the observed sample covariance matrix; fe = covariance frequency of the expected model-implied covariance matrix; k = number of distinct values in the covariance matrix; and c = number of parameters being estimated from the sample data. When interpreting χ², note that the saturated model (just-identified model) always has χ² = 0, indicating a perfect fit. The χ² value ranges from zero for the saturated model to a maximum for the independence model with no paths; the theoretical model’s χ² lies in this range. The acceptable level is determined by comparing the obtained χ² value with the table value for the given degrees of freedom. If the observed χ² value is greater than the critical table χ² value, then the null hypothesis (sample covariance matrix = the reproduced model-implied covariance matrix) is rejected; if not, it is accepted. The χ² value has some limitations: it increases as sample size increases, and the number of free parameters in the model influences the estimated covariance matrix. Therefore, many other goodness-of-fit indexes were developed to address the limitations of the χ² value. The RMSEA addresses the issues of sample size and degrees of freedom to overcome these limitations.

Root-Mean-Square Error of Approximation (RMSEA)

RMSEA uses the error of approximation in the population to evaluate how well the theoretical model fits the population covariance matrix. The approximation error in SEM is the discrepancy between the population parameter values (covariance matrix) and the model’s estimated values. This discrepancy,
measured by the RMSEA, is expressed per degree of freedom. Therefore it is sensitive to the number of estimated parameters and the sample size. The RMSEA goodness-of-fit formula is:
RMSEA = √{ [(χ²/DF) − 1] / (N − 1) }
where N = sample size, DF = model degrees of freedom, and χ² = the normal theory weighted least squares chi-square. In Figure 10 (Appendix), RMSEA is 0.0696, which can be computed as the square root of [(472.838/111) − 1]/[674 − 1], using the normal theory weighted least squares χ² = 472.838, DF = 111, and sample size N = 674. RMSEA cannot be negative and ranges from 0 (perfect fit) to 1 (no fit). The following criteria are suggested for interpreting RMSEA values (Byrne, 1998; MacCallum, Browne, & Sugawara, 1996):

• RMSEA value < .05: good fit
• .05 < RMSEA value < .08: reasonable fit
• .08 < RMSEA value < .10: mediocre fit
• RMSEA value > .10: poor fit
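The calculation just described can be reproduced with a few lines of Python; this is only a restatement of the arithmetic above.

```python
import math

def rmsea(chi2, df, n):
    """Root-mean-square error of approximation: sqrt(((chi2/df) - 1) / (n - 1)), floored at 0."""
    return math.sqrt(max((chi2 / df - 1) / (n - 1), 0.0))

print(rmsea(chi2=472.838, df=111, n=674))  # ≈ 0.0696
```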
Many goodness-of-fit indexes are computed from the χ² values of the saturated model and the independence model, the degrees of freedom of the saturated model and the independence model, and the number of distinct values in the sample covariance matrix. The saturated model has all of the possible paths specified, while the independence (null) model has no paths at all. The χ² value ranges from zero (a saturated model) to a maximum value, which can be found on the LISREL goodness-of-fit statistics output page. This maximum value is listed as “Chi-Square for the Independence Model” with its degrees of freedom.

The Goodness-of-Fit Index (GFI)

GFI measures the relative amount of the variances and covariances in the observed (empirical) covariance matrix (S) that is predicted by the reproduced (model-implied) covariance matrix (∑) (Jöreskog & Sörbom, 1989; Tanaka & Huba, 1984). Measures of goodness of fit typically summarize the discrepancy between the values in the observed matrix (S) and the values expected under the model in question (the reproduced matrix ∑).

GFI = 1 – (χ²model/χ²null)

The GFI can also be defined as 1 – .5*trace[(S – ∑)²]. Computing GFI values in this way requires the values of the observed matrix (S) and the reproduced matrix (∑). Suppose we are given the following two matrices: the original covariance matrix (S) and the reproduced covariance matrix (∑).
Figure 10. Goodness of fit statistics
GFI = 1 – .5*trace[(S – ∑)²]

The trace of a matrix is the sum of the elements along the principal diagonal of a square matrix. Trace S therefore equals 1.345 (.603 + .307 + .435), and trace ∑ equals 1.1367 (.342 + .271 + .5237). GFI = 1 – .10415 = 0.89585. This can be interpreted as 89.585% of the variance and covariance in the original covariance matrix being predicted by the model-implied matrix ∑.

Adjusted Goodness-of-Fit Index (AGFI)

AGFI is the GFI value adjusted by the number of degrees of freedom for the proposed model and the number of unique distinct values in the matrix S (the number of degrees of freedom for the null model).

AGFI = 1 – [(k/df)(1 – GFI)], where k = the number of unique distinct values in the matrix
     = 1 – [(153/111)(1 – .924)] = .895
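A quick check of the AGFI adjustment, using the GFI reported in the LISREL output for this model, is sketched below.

```python
# AGFI adjustment, using the values reported in the text (k = 153, df = 111, GFI = .924).
k, df, gfi = 153, 111, 0.924
agfi = 1 - (k / df) * (1 - gfi)
print(round(agfi, 3))  # ≈ 0.895
```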
Table 7. The original covariance matrix (S)

0.603
0.571   0.307
0.364   0.153   0.435

Table 8. The reproduced covariance matrix (∑)

0.342
0.735   0.271
0.734   0.543   0.5237
Table 9. Computing RMR

| Reproduced correlation | Original | Diff. | Diff. squared |
| 0.735 | 0.571 | 0.164 | 0.0269 |
| 0.734 | 0.364 | 0.37 | 0.1369 |
| 0.543 | 0.153 | 0.39 | 0.1521 |
| Mean: 0.670666667 | 0.362667 | 0.308 | 0.105298667 |

RMR = √0.105298667 = 0.32449756
The Root Mean Square Residual (RMR) and Standardized RMR

The root mean square residual (RMSR or RMR) is the square root of the mean of the squared differences between the elements of the sample covariance matrix and the model-implied (reproduced) covariance matrix. Using Tables 7 and 8, the RMR/RMSR is computed as shown in Table 9. First, subtract the original correlations from the reproduced correlations for each off-diagonal element; a three-variable correlation matrix has three diagonal cells and three off-diagonal cells, as shown in Table 8. The “Diff.” column in Table 9 shows the result of the subtraction. Second, square the result, as shown in the “Diff. squared” column of Table 9. Third, take the average of the three values in the “Diff. squared” column. The final step is to take the square root of the value from step three. The RMR/RMSR is a standard deviation of the residuals, and it is relative to the magnitudes of the covariance matrices. Therefore, it is difficult to interpret, and the acceptance level must be defined by each researcher (Kline, 2005; Schumacker & Lomax, 2004). For that reason, the standardized RMR (SRMR) is much more meaningful to interpret. Values for the SRMR range from zero to 1. The acceptable threshold level is less than .05, although values as high as 0.08 are deemed acceptable (Byrne, 1998; Diamantopoulos & Siguaw, 2000; Hu & Bentler, 1999).
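The step-by-step calculation in Table 9 can be reproduced directly; the sketch below uses the off-diagonal values from Tables 7 and 8.

```python
import math

reproduced = [0.735, 0.734, 0.543]  # off-diagonal elements of the reproduced matrix (Table 8)
original   = [0.571, 0.364, 0.153]  # off-diagonal elements of the original matrix (Table 7)

squared_diffs = [(r - o) ** 2 for r, o in zip(reproduced, original)]
rmr = math.sqrt(sum(squared_diffs) / len(squared_diffs))
print(round(rmr, 4))  # ≈ 0.3245, as in Table 9
```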
Incremental/Comparative Fit Indices

This group of indices is classified as incremental or comparative fit indices. Comparative fit is based on a comparison of the hypothesized model against a “null” model. The null model, also known as the independence or baseline model, represents a hypothesis of “no effect”: all variables in the model are uncorrelated, so there are no paths in the model.

Normed Fit Index (NFI)

The NFI is also known as the Bentler-Bonett index. It compares the χ² value of the hypothesized model to that of the null model. The NFI was introduced by Bentler and Bonnet (1980). Its formula is:
NFI = [χ²Null − χ²Model] / χ²Null

Using the fit statistics in Figure 10, the NFI can be computed as follows: χ²Null = 39278.576 and χ²Model = 472.838, so NFI = [39278.576 − 472.838] / 39278.576 = .988. In addition to its sensitivity to sample size, the NFI also has limitations in addressing model parsimony. To deal with both issues, the incremental fit index (IFI) was developed, which takes the degrees of freedom into account (Bollen, 1989). The non-normed fit index (NNFI) is non-normed and therefore more difficult to interpret.

The Comparative Fit Index (CFI)

Bentler (1990) recognized some drawbacks of the NFI, such as underestimating fit in small samples, and proposed the comparative fit index (CFI), which is directly based on the non-centrality measure. The CFI is computed as:
CFI = 1 − max[(χ²model − DFmodel), 0] / max[(χ²model − DFmodel), (χ²null − DFnull), 0]
where max denotes the maximum of the values given in the brackets, χ²null is the chi-square of the independence (null) model, χ²model is the chi-square of the proposed model, DFnull is the number of degrees of freedom of the independence (null) model, and DFmodel is the number of degrees of freedom of the proposed model. Using the fit statistics in Figure 10, the CFI can be computed as CFI = 1 − [(472.838 − 111) / (39278.576 − 136)] = .991. As Figure 10 shows, the goodness of fit statistics include an extensive array of fit indices that can be categorized into six different subgroups of statistics that may be used to determine model fit. For a very good overview of LISREL goodness of fit statistics, readers are referred to Byrne (1998, pp. 109-119) and Hooper, Coughlan, and Mullen (2008). There seems to be agreement among SEM researchers that it is not necessary to report every goodness of fit statistic from the path analysis output (Figure 10). For SEM beginners, it is not an easy task to choose a set of fit indices for the assessment of model fit, due to the complexity and multiple dimensions involved in the choice of good indices (Tanaka, 1993). Although there are no golden rules that can be agreed upon, Table 10 shows several indices that have been frequently reported and suggested for reporting in the literature (Boomsma, 2000; Crowley & Fan, 1997; Hayduk, Cummings, Boadu, Pazderka-Robinson, & Boulianne, 2007; Hooper, Coughlan, & Mullen, 2008; Kline, 2005; McDonald & Ho, 2002).
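Both index formulas are straightforward to evaluate; the sketch below reproduces the NFI and CFI calculations from the chi-square values and degrees of freedom reported in Figure 10.

```python
chi2_null, df_null = 39278.576, 136
chi2_model, df_model = 472.838, 111

nfi = (chi2_null - chi2_model) / chi2_null
cfi = 1 - max(chi2_model - df_model, 0) / max(chi2_model - df_model, chi2_null - df_null, 0)

print(f"NFI = {nfi:.3f}, CFI = {cfi:.3f}")  # NFI ≈ 0.988, CFI ≈ 0.991
```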
Table 10. Fit indices and acceptable threshold levels, based on Hooper, Coughlan, and Mullen (2008) and Hoyle and Panter (1995)

| Fit index | Acceptable threshold levels | References |
| Absolute indices | | |
| Chi-square (χ²) | Low χ² relative to degrees of freedom with an insignificant p value (p greater than 0.05) | |
| Relative chi-square (χ²/df) | 2:1 | Tabachnik and Fidell, 2007 |
| | 3:1 | Kline, 2005 |
| Root mean square error of approximation (RMSEA) | less than 0.08 | Steiger, 2007 |
| Goodness-of-fit index (GFI) | greater than 0.90 | |
| Adjusted GFI (AGFI) | greater than 0.90 | |
| Root mean square residual (RMR) | Good models have small RMR | |
| Standardized RMR (SRMR) | less than 0.08 | Hu and Bentler, 1999 |
| Incremental indices | | |
| Normed fit index (NFI) | greater than 0.90 | |
| Non-normed fit index (NNFI) | greater than 0.90 | |
| Comparative fit index (CFI) | greater than 0.90 | |
Chapter 6
An Introduction to Structural Equation Modeling (SEM) and the Partial Least Squares (PLS) Methodology

Nicholas J. Ashill
American University of Sharjah, United Arab Emirates
ABSTRACT

Over the past 15 years, the use of Partial Least Squares (PLS) in academic research has enjoyed increasing popularity in many social sciences, including Information Systems, marketing, and organizational behavior. PLS can be considered an alternative to covariance-based SEM and has greater flexibility in handling various modeling problems in situations where it is difficult to meet the hard assumptions of more traditional multivariate statistics. This chapter focuses on PLS for beginners. Several topics are covered, including foundational concepts in SEM, the statistical assumptions of PLS, a LISREL-PLS comparison, and reflective and formative measurement.
DOI: 10.4018/978-1-60960-615-2.ch006

MAIN FOCUS

What is Structural Equation Modeling?

Structural equation modeling (SEM), also referred to as ‘causal modeling’, has become a popular tool in the methodological arsenal of social
science researchers (Bagozzi & Baumgartner, 1994; Chau, 1997). SEM is a method for representing, estimating, and testing a theoretical network of (mostly) linear relations between variables, where those variables may be either observable or directly unobservable (Hair, Black, Babin, Anderson, & Tatham, 2006). The multivariate technique combines aspects of multiple regression (examining dependence relationships) and factor analysis (representing unmeasured concepts or factors with
multiple variables) to estimate a series of interrelated dependence relationships simultaneously. The issue of simultaneity is especially important since the measures often derive their meaning from the conceptual network within which they are embedded. SEM is grounded in three main premises. First, from the field of psychology comes the belief that the measurement of a valid construct cannot rely on a single measure. Second, from the field of economics comes the conviction that strong theoretical specification is necessary for the estimation of parameters. Third, from the field of sociology comes the notion of ordering theoretical variables and decomposing types of effects. Taken as a whole, these ideas have emerged into what is called latent variable structural equation modeling (Falk & Miller, 1992). The two common approaches to SEM are the covariance-based approach used in LISREL (Linear Structural Relations), AMOS (Analysis of Moment Structures), and EQS (Anderson & Gerbing, 1988; Bentler, 1995; Bollen, 1989; Bollen & Long, 1993; Byrne, 1994; Jöreskog & Sörbom, 1982, 1988), and the variance-based approach used in PLS-PC, PLS-Graph, SmartPLS, and XLSTAT-PLS (Chin, 1995, 1998; Esposito Vinzi, Chin, Hensler, & Wang, 2010; Fornell & Cha, 1994; Hansmann & Ringle, 2004; Wold, 1985). Both approaches belong to the family of techniques that Fornell (1987) and Lohmoller (1989) call “the second generation of multivariate data analysis techniques.” Unlike first generation techniques such as multiple regression, principal components and cluster analysis, canonical analysis, discriminant analysis, and others, second generation models are able to bring together psychometric and econometric analysis in such a way that the best features of both are exploited (Fornell & Larcker, 1981; Fornell, 1987). SEM can therefore be viewed as an extension of several first generation multivariate techniques (Hair et al. 2006) because it incorporates the psychometrician’s notion of unobserved latent variables (constructs) and measurement error in the same estimation
procedure. In social sciences research, theoretical constructs are typically difficult to operationalize in terms of a single measure, and measurement error is often unavoidable. Therefore, given an appropriate statistical testing method, structural equation models are recognized as indispensable for theory evaluation in this type of research. Traditional first generation techniques have a number of limitations. First, the statistical tests of the regression coefficients (and the use of procedures like stepwise regression) make assumptions about the data that may not hold, such as sufficient sample size and a multivariate normal distribution. Second, the two-step process of aggregating variables to form variate scores and then testing the relationships among these variates presumes that the relative importance of items in each composite is portable across theoretical contexts, an assumption that may not be valid (Fornell, 1982). In traditional multiple regression and path analysis, scales for the latent variables are created by averaging, summing, or some kind of factor analysis of observed variables, and the results are then imported into a regression (or path) model. The assumption is that such scores are portable, an assumption that Fornell (1987) argued is not tenable. This two-stage analysis can potentially result in invalid estimates, since it assumes that the relationship among the measures of a construct is independent of the theoretical context within which the measures are embedded (Fornell, 1982; Fornell & Yi, 1992; Hirschheim, 1985). Third, all measurement is made with error, and though error may be estimated using methods such as factor analysis, these error estimates do not explicitly figure in regression analysis, nor are they estimated within the context of the theory being tested (Fornell, 1982). Fourth, each first generation technique can examine only a single relationship at a time, i.e., a single relationship between a dependent variable and an independent variable (Hair et al. 2006). In contrast, SEM can estimate many equations at once, and they can be interrelated, meaning that the dependent variable
in one equation can be an independent variable in other equations. Both covariance-based and variance-based approaches such as LISREL and PLS allow for causal interpretations of the relations between the latent variable and the indicators, as well as the relations among the latent variables. Both techniques also allow constructs to be measured with multiple indicators, thus minimizing biases imposed by measurement error (Herting, 1985; Kenny, 1979). SEM also has the added benefit of being able to model both direct and indirect relationships among constructs (or latent variables) to determine the relative importance of antecedent constructs, making it possible to test complex theoretical models. This is an advantage over traditional path analysis, where the indirect effects need to be calculated by hand (Barclay, Higgins, & Thompson, 1995). Three types of effects may be distinguished with SEM: direct, indirect and total effects. The direct effect is that influence of one variable on another that is unmediated by any other variable in a path model. The indirect effects of a variable are mediated by at least one intervening variable. The sum of the direct and indirect effects is the total effect; in other words, one variable’s total effect on another is the sum of its direct effect and indirect effects (Bollen, 1989). SEM also provides the means to resolve the thorny problem of multicollinearity (Rigdon, 1998). By using multiple items in a questionnaire, the items are modeled as measures of the same common factor, and only the factor is used as a (single) structural variable. The principal component, i.e., the factor explaining the most variance, is used as the most reliable and valid observable indicator reflecting each of the unobservable research constructs (latent variables). Thus, all of the multiple measures are included in the model, but only one variable enters the prediction equation. High correlations among the multiple items actually improve the stability of the factor analytic measurement model.
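To make the decomposition of effects concrete, the following is a minimal numerical sketch in Python of the direct, indirect and total effect of one construct on another in a simple mediated path model (X → M → Y with a direct path X → Y). The path coefficients are hypothetical values chosen purely for illustration; they are not taken from any study discussed in this chapter.

```python
# Hypothetical standardized path coefficients for a simple mediated model:
#   X -> M -> Y (indirect route) and X -> Y (direct route).
p_xm = 0.50   # path from X to the mediator M
p_my = 0.40   # path from M to Y
p_xy = 0.30   # direct path from X to Y

direct_effect = p_xy
indirect_effect = p_xm * p_my            # product of the paths along the mediated route
total_effect = direct_effect + indirect_effect

# direct = 0.30, indirect = 0.20, total = 0.50
print(f"direct = {direct_effect:.2f}, indirect = {indirect_effect:.2f}, total = {total_effect:.2f}")
```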
Structural Equation Modeling Using Partial Least Squares

Partial Least Squares (PLS), a relatively new, powerful multivariate analysis technique with roots in path analysis (Wold, 1985), is ideal for testing structural models involving multiple constructs measured with multiple indicators (Fornell, 1982, 1987; Lohmoller, 1989; Wold, 1982, 1985). The PLS estimation approach is discussed in detail in many books and articles that deal with the theoretical and application issues in SEM (Barclay et al. 1995; Bollen, 1989; Chin, 1998; Esposito Vinzi et al. 2010; Fornell & Larcker, 1981; Fornell, 1982, 1987; Fornell & Bookstein, 1982; Fornell & Cha, 1994; Hayduk, 1987, 1996). PLS has gained interest and use in various disciplines in recent years, including information systems (Chin & Gopal, 1995; Compeau & Higgins, 1995; Rivard & Huff, 1988; Thompson, Higgins, & Howell, 1994), marketing (Ashill & Jobber, 2009; Hulland, 1996; Hulland & Kleinmuntz, 1994; Johnson & Fornell, 1987), and organizational behaviour (Howell & Higgins, 1990; Lee, 2007). SEM with PLS is a conceptual approach to data analysis involving the interplay of theoretical thinking and empirical data. This approach to inquiry embraces abstract and empirical variables simultaneously and recognizes the interplay of these two dimensions of theory development. Empirical data need not achieve high-level precision but simply represent realistic attempts to observe the world around us. Falk and Miller (1992, pp. 92-93) comment: “The more sophisticated the theory and precise the observations, the more our work approaches the scientific goal of understanding causal mechanisms. Before we reach that point however, we need a research tool that allows us to examine the immense complexity that exists in the social and behavioral sciences. Professor Wold had this in mind when he developed soft modeling”.
The term “Soft Modeling” indicates that model building applies when theoretical knowledge is scarce and stringent distribution assumptions are not applicable. Soft modeling provides a system for expressing theoretical ideas about a sequence of events (Falk & Miller, 1992; Lohmoller, 1989; Wold, 1980) and can be viewed as a method of estimating the likelihood of an event given information about other events. It is not intended to be a system for the assessment of causation, but is particularly applicable when the conditions of a closed system, i.e., a set of theoretical, measurement and distributional conditions, are not met. That is, with soft modeling, the theoretical component simply accounts for as much of the variance in the measured variables as possible. Using this procedure, the highest percentage of common variance among the measured variables is extracted. As a result, the component maximally predicts the variance of the individual manifest variables. As one moves away from the powerful conditions required for a closed system, the concept of causation must be abandoned and replaced by the concept of predictability (Wold, 1980, 1985). Within the context of multivariate statistics, PLS, unlike LISREL, is a least squares estimation procedure that makes few assumptions about the nature of the data. In PLS, optimal linear relationships are computed between latent variables and are interpreted as the best set of predictions available for a given study considering all the limitations. PLS thus has a philosophical as well as a statistical/mathematical position. Wold (1980, 1982) contends that the work of science is an interplay between ideas about the world and our observations. Such a position is consistent with the modern philosophy of science, which views science as the union of theory and empirical observations (Ackermann, 1985). Some sciences are endowed with both strong theory and precise empirical observations. Under these conditions there are well known data analysis procedures available. In the social sciences, however, soft
theory and soft empirical observations are the rule, where there are conditions of low information. Although PLS has a rigorous mathematical basis, the mathematical model is soft in that it makes no measurement, distributional or sample size assumptions. Lohmoller (1989, p. 64) states that “it is not the concepts nor the models nor the estimation techniques which are ‘soft’, only the distributional assumptions”.
A Comparison of LISREL and PLS

The fundamental differences between LISREL and PLS are reflected in the PLS and LISREL algorithms and their optimum properties. The purpose of LISREL is to model the covariance structure of the manifest variables (Wold, 1985). This technique is based on maximum likelihood or generalised least squares and is used where prior theory is strong and further testing is the objective of the research (Jöreskog & Sörbom, 1984; Pedhauzur, 1982). LISREL is a model for theory testing and a general method for covariance structure analysis in which a theoretical model is specified in terms of covariances and tested against empirical data (Fornell & Bookstein, 1982). The aim of LISREL is therefore to estimate causal model parameters (e.g., loadings and paths) such that the discrepancies between the initial empirical covariance data matrix, and the covariance matrix deduced from the model structure and the parameter estimates, are minimized. It is concerned with the entire covariance matrix. The focus of LISREL is thus on fitting the data to the proposed structural model and generating the path coefficients that are most likely to have produced the observed sample data. Accordingly, it is concerned with regenerating covariance structures in the data and not with explaining variance in the dependent measures. The emphasis is on overall model fit, i.e., testing a strong theory as a whole (Barclay et al. 1995). Indicators are also reflective of constructs. Data are assumed to
be multivariate normal, and sample sizes must be relatively large (Wold, 1985). In contrast, PLS is not a model in the same sense. The objective in PLS is to estimate the model parameters based on the ability to minimize the residual variances of dependent variables (both latent and observed) (Chin, 1998). Instead of covariance structure analysis, it belongs to the same class of models as canonical correlation, principal components, and regression analysis. The conceptual core of PLS is an iterative combination of principal components analysis relating measures to constructs, and path analysis permitting the construction of a system of constructs. Being a components-based structural equation modeling technique, PLS simultaneously models the structural paths (i.e., theoretical relationships among latent variables) and measurement paths (i.e., relationships between a latent variable and its indicators). The path coefficients obtained from a PLS analysis are standardized regression coefficients, while the loadings of items on individual constructs are factor loadings (Hulland, 1999). Factor scores created using these loadings are equivalent to weighted composite indices. Thus, PLS results can be easily interpreted by considering them in the context of regression and factor analysis. PLS provides an advantage over regression for two reasons: (1) it considers all path coefficients simultaneously to allow the analysis of direct, indirect and spurious relationships, and (2) it estimates the individual item weightings in the context of the theoretical model rather than in isolation. Rather than assume equal weights for all the indicators of a scale, the PLS algorithm allows each indicator to vary in how much it contributes to the composite score of the latent variable. Thus, indicators with weaker relationships to related indicators and the latent construct are given lower weightings. Chin (1998) states that PLS is preferable to techniques such as regression, which assume error-free measurement (Lohmoller, 1989; Wold, 1982, 1985). This makes LISREL
‘closer to the model, more confirmatory and more model analytic’, and PLS ‘closer to the data, more explorative and more data analytic’ (Lohmoller, 1989, p. 213). The objective of the researcher, therefore, and the stage of development of the theory under consideration become key criteria in methodology selection (Barclay et al. 1995). There are other features of LISREL which are more theoretically compelling but may not apply in a given research situation. For example, LISREL is more elegant with respect to the types of models that can be proposed. It handles nonrecursive relationships, permits the comparison of the fit of a model between groups, and allows the modeling of correlated error terms. PLS does not allow for these specifications and assumes uncorrelated errors, as in regression. In addition, LISREL offers a number of measures of overall model ‘fit’ such as the χ² goodness-of-fit test. PLS does not yet have these overall measures, relying instead on variance explained (i.e., R-square) as an indicator of how well PLS has met its objective (Cohen, Cohen, Teresi, Marchi, & Velez, 1990). The philosophical distinction between these two approaches is whether to use SEM for theory testing and development or for predictive purposes (Anderson & Gerbing, 1988; Chin, 1998). Barclay et al. (1995) recommend PLS for predictive research models in the initial exploratory stages of theory development, when the conceptual model and the measures are not well developed, whereas covariance based estimation methods such as LISREL, AMOS and EQS are more suited for testing, in a confirmatory sense, how well a theoretical model fits observed data, generally requiring much stronger theory than PLS (Fornell & Bookstein, 1982; Jöreskog & Wold, 1982). With covariance based approaches to SEM, a complete description of the theory is represented in a structural model and a theoretical rationale is offered for each proposed causal relationship. These two requirements call for a well-developed and stable theory in order for the proposed causal relationships to represent real-world phenomena.
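Before turning to the contrast with soft modeling, the role of R-square as PLS's substitute for an overall fit statistic can be illustrated directly. The fragment below is a minimal Python sketch using simulated, standardized construct scores (the constructs xi1, xi2 and eta are hypothetical); it simply shows how variance explained in an endogenous construct is computed from an ordinary least squares regression, which is how it is interpreted in a PLS analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

def z(v):
    """Standardize a variable to mean 0 and unit variance."""
    return (v - v.mean()) / v.std()

# Simulated standardized latent variable scores (illustrative only).
xi1 = z(rng.standard_normal(n))                                  # exogenous construct 1
xi2 = z(rng.standard_normal(n))                                  # exogenous construct 2
eta = z(0.5 * xi1 + 0.3 * xi2 + 0.8 * rng.standard_normal(n))    # endogenous construct

# Standardized path coefficients from an OLS regression of eta on its predictors.
X = np.column_stack([xi1, xi2])
beta, *_ = np.linalg.lstsq(X, eta, rcond=None)

# Variance explained (R-square) in the endogenous construct: the quantity PLS
# reports in place of an overall goodness-of-fit statistic.
residuals = eta - X @ beta
r_square = 1 - residuals.var() / eta.var()
print(f"paths: {beta.round(2)}, R-square: {r_square:.2f}")
```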
In contrast, in soft modeling such as PLS, a set of relationships may be formulated from embryonic theory, previous research findings, or the problem at hand (Wold, 1985). Chin (1998) also states that, compared to the better known factor-based covariance fitting approach for latent structural modeling (e.g., LISREL), the component-based PLS avoids two serious problems: inadmissible solutions and factor indeterminacy (Fornell & Bookstein, 1982). PLS incorporates defined constructs, as in principal components analysis, which means the constructs are estimated as weighted linear aggregates of their indicators, are therefore completely defined by their indicators, and have component scores that are readily available. Constructs in LISREL are indeterminate, as are factors in the factor analytic tradition, which means the constructs contain surplus, untapped meaning. This results in an inability to develop factor scores. Due to this indeterminacy of factor score estimates, there is a loss of predictive accuracy. However, the PLS approach estimates the latent variables as exact linear combinations of the observed measures, thus avoiding the indeterminacy problem and providing an exact definition of component scores (Chin, 1998). This makes PLS much more predictive in a traditional regression sense and able to enhance ‘knowledge’ without making causal claims (Fornell & Bookstein, 1982; Blili, Raymond, & Rivard, 1998). Jöreskog and Wold (1982, p. 270) state: “Maximum Likelihood is theory-orientated, and emphasizes the transition from exploratory to confirmatory analysis. PLS is primarily intended for causal-predictive analysis in situations of high complexity but low theoretical information.” PLS is also considered better suited for explaining complex relationships than LISREL (Fornell & Bookstein, 1982; Fornell, Lorange, & Roos, 1990) since it readily accommodates complex theoretical and measurement models (Barclay et al. 1995). As stated by Wold (1985, p. 589),
“PLS comes into the fore in larger models, when the importance shifts from individual variables and parameters to packages of variables and aggregate parameters”. Wold (1985) states later (pp. 589-590), “In large, complex models with latent variables PLS is virtually without competition.” A model with many variables can be estimated in PLS because (a) least squares algorithms are highly efficient, and (b) the analysis in PLS is segmented or partitioned. Wold (1982) states that LISREL comes to the fore in problem areas where the models are relatively simple, namely where the stringent assumptions behind its optimality aspirations are realistic, and when the LISREL technique is not bogged down by too many parameters to estimate. When the problems under analysis become more complex, the stringent frequency assumptions of LISREL become less tenable, and the optimality aspirations become more or less illusory. The complexity of a model, in terms of the number of latent constructs and manifest variables, often makes identification, convergence, and goodness of fit difficult to achieve with the LISREL algorithm, and results in improper solutions (e.g., negative variances, correlations > 1). PLS does not encounter these situations. Soft modeling comes to the fore with the PLS estimation technique, which aims at consistency in the statistical inference rather than optimality, and which provides “instant estimation” even for large models with a large number of parameters to estimate (Wold, 1982). For example, Noonan (1979) examined 16 constructs measured with 59 manifest variables with over 1100 cases and claimed that the LISREL algorithm would find this too difficult. PLS also has the advantage of being able to provide a robust estimation procedure with respect to various potential deficiencies in the model specification, such as multicollinearity and skewed response distributions (Cassel, Hackl, & Westlund, 2000; Wold, 1980). No specific distributions are required with PLS and there are no assumptions
about the independence of observations. PLS is thus distribution-free, allowing non-multivariate normal and non-interval scaled data. This is not the case with the maximum likelihood estimation method used in LISREL, which assumes a multivariate normal distribution. Data from non-normal or unknown distributions violate one of the major assumptions for hard modeling procedures. A calculation requirement of maximum likelihood is that the probability distribution be known and used. If the data do not conform to the distribution, then all estimates are biased. PLS, on the other hand, does not require a normal or known distribution. Rather, given any distribution, PLS produces the best set of predictive weights. Because the procedure is based on the least squares method, it provides unbiased estimates with minimum variance (Falk & Miller, 1992; Wold, 1985). When the average of many estimates of a particular value is close to the true parameter value, the estimate is said to be unbiased. Thus, if a parameter is estimated many times by a least squares method, the mean of the estimates will be close to the actual parameter and therefore unbiased. The size of the variance of the estimates, i.e., the variance of the random sampling distribution of parameter estimates, will be smallest when least squares methods are used, and they will always provide the best linear approximation to the true parameter value (Falk & Miller, 1992). PLS is also the approach of choice with smaller sample sizes. A problem with LISREL is its strong assumption of large sample sizes, whereas PLS can handle small sample sizes (Fornell, Tellis, & Zinkhan, 1982). PLS can deal with small sample sizes because the iterative algorithm behind PLS estimates parameters in only small subsets of a model during any given iteration. Once specified, the measurement and structural parameters of a PLS causal model are estimated in an iterative fashion using traditional ordinary least squares simple and multiple regressions. The PLS algorithm takes segments of complex
models and applies the same process until the entire model converges. At any given time, the iterative procedure is working with one construct and a subset of measures related to that construct, or to adjacent constructs in the model. It is this segmenting of complex models that allows PLS to work with small sample sizes (Wold, 1985). This data reduction procedure in PLS is fundamentally no different from adding the answers to several questions to create a scale score and then using the scale scores in subsequent analysis (Hulland, 1996). It is very similar to obtaining factor scores through principal component analysis and using the factor scores in future analysis. The subset estimation process consists of simple and multiple regressions, so that the sample required for statistical analysis is that which would support the most complex multiple regression encountered in the model. In general, the most complex regression will involve: (1) the indicators from the most complex formative construct, or (2) the largest number of predictors leading to an endogenous construct, as predictors in an OLS regression. Sample size requirements, using the general rule of thumb of 10 times per predictor, become 10 times the number of predictors from (1) or (2), whichever is greater (Barclay et al. 1995; Wold, 1985)1. A weaker rule of thumb, similar to the heuristic for multiple regression (Tabachnik & Fidell, 1989), would be to use a multiplier of 5 instead of 10 in the preceding formulae. An extreme example is given by Wold (1989), who analyzed 27 variables using two latent constructs with a data set consisting of 10 data cases. Lohmoller (1982) also presents an example of a model with 96 indicators and 26 constructs estimated with 100 cases. Using the general rule of thumb, 200 usage responses would be sufficient for statistical analysis using PLS if there are no more than 20 items for the most complex formative construct and if no construct has more than 20 incoming or out-going paths (endogenous construct). Smith and Barclay (1997), for example, examined 16 variables to test a model of selling partner relationship
effectiveness. Each construct had between 2 and 9 items and there were no more than 4 in-coming or out-going paths. Statistical analysis using PLS was performed using 135 usable responses. In this case, the minimum sample size required to meet PLS criteria would have been 90 responses (the number of items on the most complex formative construct is 9, and this is greater than the number of incoming or out-going paths). Finally, PLS can model both formative (cause) and reflective (effect) indicators (Fornell & Bookstein, 1982; Fornell & Cha, 1994). A key decision for the researcher is whether formative or reflective indicators should be used in the data analysis. An underlying assumption of hard-modeling approaches such as LISREL, EQS and AMOS is that the items or indicators used to measure a latent variable are reflective in nature, so as to be consistent with the statistical algorithm. In LISREL, formative indicators can be used indirectly by summing the weighted formative indicator scores to create a single-item measure. The weights may be unity or estimated using factor analysis outside the LISREL procedure. The single-item measure is then used to measure the emergent latent construct during model estimation in LISREL. On the other hand, PLS is better suited to handling formative indicators since it can estimate the formative indicator weights and loadings along with the structural model estimation. That is, in PLS, the measurement model for both the reflective and formative indicators is optimized in conjunction with the structural model. As a result, the formative indicators in PLS explain the highest amount of variance for the emergent construct as well as for the criterion construct of the predictor emergent construct. Since one of the strengths of structural equation modeling (SEM) lies in simultaneously estimating the measurement and structural models, using externally estimated construct scores diminishes this advantage of SEM when using LISREL with formative indicators (Bagozzi & Baumgartner, 1994; Diamantopoulos & Winklhofer, 2001; Fornell & Bookstein, 1982).
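The "10 times" heuristic applied in the Smith and Barclay example can be written down directly. The small helper below is a Python sketch of that rule of thumb (with the weaker multiplier of 5 mentioned above available as an option); the function simply encodes the heuristic attributed to Barclay et al. (1995), not a formal power analysis, and its name and arguments are illustrative.

```python
def pls_min_sample_size(max_formative_indicators, max_paths_to_endogenous, multiplier=10):
    """Rule-of-thumb minimum sample size for a PLS model.

    The sample must support the largest OLS regression in the model: either
    the block of the most complex formative construct, or the endogenous
    construct with the largest number of incoming structural paths.
    """
    return multiplier * max(max_formative_indicators, max_paths_to_endogenous)

# Smith and Barclay (1997): at most 9 items on the most complex formative
# construct and no more than 4 incoming or outgoing paths -> 10 * 9 = 90.
print(pls_min_sample_size(9, 4))                 # 90
print(pls_min_sample_size(9, 4, multiplier=5))   # weaker heuristic: 45
```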
Reflective vs. Formative Indicators

SEMs incorporating both exogenous and endogenous constructs2 can be modeled with formative or reflective indicators as dictated by theory. In the questionnaire design phase of a study, the researcher confronts the decision to model indicators as reflective or formative. A reflective variate or construct is one where the variables are expressed as a function of the variate (the observed variables are assumed to be caused by the latent variable). As the name suggests, reflective measures reflect an existing latent, unobservable construct. These measures are also called effect indicators since they are the effects of the latent construct (Bollen, 1989; Bollen & Lennox, 1991; Chin & Todd, 1995; Chin, 1998; Cohen et al. 1990; Fornell & Larcker, 1981; Fornell & Bookstein, 1982). Most social science researchers routinely use reflective indicators in their models, which means that observed variables (and their variances and covariances) are regarded as manifestations of underlying constructs. Reflective indicators are more consistent with how researchers typically view relationships between constructs and measures. In IT research, for example, the perceived ease of use of an information system is typically specified as a reflective construct, i.e., the ease of use of an information system is reflected in answers to a series of ease of use questions (Adams, Nelson, & Todd, 1992; Davis, Bagozzi, & Warshaw, 1989; Hendrickson, Massey, & Cronan, 1993). An alternative and, in the information systems literature, practically ignored measurement perspective is based on the use of formative (cause) indicators, and involves the construction of an index rather than a scale (Bollen & Lennox, 1991; Chin & Todd, 1995; Diamantopoulos & Winklhofer, 2001). If the latent construct is a categorization and measurement device for a complex phenomenon, the indicators can be specified as formative. An example is personal computer utilization, i.e., utilization of a personal computer can be formed by length of use, frequency of use,
number of different software packages used, and the variety of tasks performed (Chin, 1998). As the name suggests, a formative variate or construct is one where the observed variables are assumed to cause a latent variable (the variate is expressed as a function of the observed variables). Formative measures are termed cause indicators since they cause or form the emergent latent unobservable construct (Bollen & Lennox, 1991; Chin & Todd, 1995; Chin, 1998; Edwards, 2001). Unlike reflective indicators, whereby “the latent variable causes the observed variables” (Bollen, 1989, p. 65), formative indicators can be viewed “as causing rather than being caused by the latent variable measured by the indicators” (MacCallum & Browne, 1993, p. 533).

Figure 1. Reflective versus formative indicators

As shown in Figure 1, in the case of reflective indicators (R1, R2, R3, R4), the arrows point from the existing latent construct to the observed indicators, whereas in the case of formative indicators (F1, F2, F3, F4), the arrows point in the opposite direction, from the observed indicators to the emergent latent construct3. Since the direction of the measure-construct relationship for reflective indicators is opposite to that for formative indicators, the decision to model indicators as either reflective or formative is an important one when using Structural Equation Modeling (SEM). The decision is even more critical when using the SEM-based partial least squares (PLS) methodology. This is because in PLS, the measurement model for
reflective as well as formative indicators can be estimated along with the structural model. The designation of reflective or formative rests on the theoretical underpinnings of a construct. One cannot simply arbitrarily link sets of items to constructs. According to Bollen (1989) and Chin (1998), the decision to model indicators as reflective or formative is based on two important considerations. First, the choice between using formative or reflective indicators for a particular construct should always come back to what makes sense theoretically, that is, the causal priority between the indicator and the latent variable (Bollen, 1989; Cohen et al. 1990). In other words, the researcher must establish whether the measures define the construct (formative), or whether the construct gives rise to the measures (reflective). Fornell et al. (1990) suggest that the researcher must provide a clear argument for choosing one form of epistemic relationship over the other for each construct. In doing so, it is necessary to employ strong theory and multiple measures to ensure acceptable content validity. Similarly, Hulland (1999) suggests that the researcher needs to think carefully about whether it is more correct to think of the underlying construct as ‘causing’ the observed measures, i.e., a reflective relationship, or of the measures as ‘causing’ (or defining) the construct, i.e., a formative relationship. “When researchers use formative or reflective relationships in their models, their choice of a particular form of epistemic relationship should be both
justified clearly and applied consistently” (Hulland, 1995, p. 11). Second, for a decision to model indicators as reflective, the measures should be highly (and positively) correlated – both theoretically and empirically. Reflective indicators are created under the perspective that they all measure the same underlying phenomenon. With reflective measurement there should be no resistance to replacing one indicator with another indicator as long as the latter’s correlation with the first indicator is of a similar magnitude (Chin, 1998). In other words, a change in the latent construct or one of the measures of the latent construct should result in a reasonably large change in the same direction for all the other measures of the latent construct. A change in the latent variable will be reflected in a change in all indicators, since the “latent variable determines its indicators” (Bollen & Lennox, 1991, p. 306). If replacing indicators with equally reliable indicators causes any unease or skepticism, this probably signals that a reflective model may not be appropriate (Diamantopoulos, 1999). This is because “with effect indicators of a unidimensional construct... equally reliable indicators are essentially interchangeable” (Bollen & Lennox, 1991, p. 308). Being interchangeable is a key principle of reflective measures (Churchill, 1999; Diamantopoulos, 1999; Nunnally & Bernstein, 1994). Chin (1998) states that if indicators are not interchangeable, they should be modeled as formative. With formative measurement, the latent variable is formed by its indicators, which means that a change in the latent variable is not necessarily accompanied by a change in all of its indicators. For example, the latent variable socio-economic status with the indicators of income, occupational prestige, and education, can be thought of as a summary index of observed variables (Fornell & Larcker, 1981; Fornell & Bookstein, 1982). If an individual loses his job, then socio-economic status would be negatively affected. But to say that a negative change has occurred in an individual’s
socio-economic status does not imply that there was a job loss. Furthermore, a change in an indicator such as income does not necessarily imply a similar directional change for the other indicators, i.e., education and occupational prestige. Unlike reflective indicators, formative indicators can have positive, negative or no correlation with one another (Bollen & Lennox, 1991). Other examples of formative indicators would be the amount of beer, wine and hard liquor consumed as indicators of mental inebriation. Reflective indicators would be blood alcohol level, driving ability, MRI brain scan, and performance on mental calculations. If truly reflective, then an improvement in the blood alcohol level measure for an individual would also imply an improvement in the MRI activity and other measures, since they are all meant to tap into the same concept or phenomenon. For the formative measures, in contrast, an increase in beer consumption does not imply similar increases in wine or hard liquor consumption. Therefore, formative indicators need not be correlated. Existing measure development guidelines (Spector, 1992) focus almost exclusively on scale development, whereby items (i.e., observed variables) composing a scale are perceived as reflective (effect) indicators of an underlying construct (i.e., latent variable). Diamantopoulos and Winklhofer (2001) state that unlike scale development, for which detailed step-by-step guides exist (Churchill, 1999; Spector, 1992), guidelines for constructing indexes based on formative indicators are much harder to find. They suggest that four issues are essential to successful index construction: a) content specification, b) indicator specification, c) indicator collinearity, and d) external validity. With formative indicators, since the observed indicators are conceptualized as a mix of variables that in combination lead to (cause) the formation of the latent variable, examination of correlation or internal consistency is considered to be inappropriate and illogical (Bagozzi, 1994; Bollen, 1984; Chin, 1998). The very nature of formative measurement renders
internal consistency inappropriate for assessing the suitability of indicators. “The best we can do... is to examine how well the index relates to measures of other variables” (Bagozzi, 1994, p. 333). Since reliability measures such as Cronbach’s alpha, internal consistency and average variance extracted cannot be used for measuring the reliability of formative indicators (Chin, 1998), it is important to measure the emergent construct with a large number of indicators to adequately tap into the multidimensional and multifaceted domain of the construct (Bollen & Lennox, 1991). Nunnally and Bernstein (1994) state that ‘breadth’ of definition of a construct is extremely important to causal indicators because failure to consider all facets of the construct will lead to an exclusion of relevant indicators and thus exclude part of the construct itself. Given that correlation between the item scores and the true score of the latent variable is not presumed with formative measurement, the researcher needs to consider whether a set of indicators are the critical antecedent variables in the formation of the latent variable. To say that the indicators combine into a broad factor is neither compelling nor complete. All indicators that form a construct should be included (a census of indicators rather than a sample), which means that a construct cannot be defined independently of its measures (Bollen & Lennox, 1991; Hulland, 1995). More specifically, the items used as indicators must cover the entire scope of the latent variable as described under the content specification. Hulland (1995) further notes that if the epistemic relationship is formative, additional measures of the construct are not possible and the researcher can never remove any of the measures regardless of how well or poorly they may load on the construct. However, Diamantopoulos and Winklhofer (2001) argue that the literature is unclear as to how to assess the suitability of formative indicators and virtually silent on the circumstances calling for the removal of invalid indicators from the index. From a theoretical perspective, elimination of indicators
carries the risk of changing the construct itself and should always be approached with caution (Diamantopoulos & Winklhofer, 2001). Another issue particular to formative indicators is that of multicollinearity. This is because the formative measurement model is based on a multiple regression, and therefore the stability of the indicator coefficients is affected by the sample size and the strength of the indicator intercorrelations. Excessive collinearity among indicators thus makes it difficult to separate the distinct influence of the individual manifest variables on the latent variable. Chin (1998) warns that if the indicators are modelled as formative, it is important that the ‘indicators are relatively independent of one another, that is, there are no multicollinearity problems, and the sample size is large enough’ (Chin, 1998, p. 307). Also, the formative modeling option may be moot if the estimates are not stable, and the lack of multicollinearity is important if the researcher is concerned with understanding the formation process of the latent variable. Multicollinearity occurs when any single independent variable is highly correlated with a set of other independent variables (Churchill, 1999; Hair et al. 2006). The simplest and most obvious means of identifying collinearity is an examination of the correlation matrix for the independent variables. The presence of high correlations (generally .90 and above) is the first indication of substantial collinearity. However, the lack of high correlation values does not ensure a lack of collinearity, as collinearity may be due to the combined effect of two or more other independent variables. The two most common measures for assessing multicollinearity are tolerance and its inverse, the variance inflation factor (see Hair et al. 2006 for an overview of how these are calculated). In summary, the decision to model indicators as reflective is based on two important considerations. First, it should be possible to conceptually argue that the measures are the effects of the latent construct; second, the measures should be highly (and positively) correlated – both theoretically and
empirically. If both conditions are not fulfilled, it may be more appropriate to model the indicators as formative. Where items are developed to reflect a single latent construct, the latent variable is regarded as the cause of each of the item scores. As a result, there is a presumed correlation between the item scores and the true score of the latent variable (DeVellis, 1991). With formative indicators, correlation between the item scores and the true score of the latent variable is not presumed.
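The behavioural difference between the two kinds of indicators can be illustrated with a small simulation. The sketch below, written in Python with simulated data only, borrows the socio-economic status example from above: reflective indicators are generated as noisy effects of one latent variable, while formative indicators (income, education, prestige) are generated independently and then combined into an index. The correlation matrices show why high positive intercorrelation is expected only in the reflective case, and a simple variance inflation factor check of the kind just discussed is included for the formative block; all variable names and coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# --- Reflective block: indicators are effects of one latent variable ---
latent = rng.standard_normal(n)
reflective = np.column_stack(
    [0.8 * latent + 0.6 * rng.standard_normal(n) for _ in range(3)]
)
print("reflective indicator correlations:\n",
      np.corrcoef(reflective, rowvar=False).round(2))    # high positive correlations

# --- Formative block: independent indicators form (cause) an index ---
income, education, prestige = (rng.standard_normal(n) for _ in range(3))
ses_index = 0.5 * income + 0.3 * education + 0.2 * prestige   # emergent construct
formative = np.column_stack([income, education, prestige])
print("formative indicator correlations:\n",
      np.corrcoef(formative, rowvar=False).round(2))      # near zero is perfectly acceptable
for name, indicator in [("income", income), ("education", education), ("prestige", prestige)]:
    print(name, "correlation with the index:", round(np.corrcoef(indicator, ses_index)[0, 1], 2))

# Variance inflation factor for each formative indicator: VIF_j = 1 / (1 - R2_j),
# where R2_j comes from regressing indicator j on the remaining indicators.
def vif(X):
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        r2 = 1 - (y - others @ beta).var() / y.var()
        out.append(1 / (1 - r2))
    return out

print("formative VIFs:", [round(v, 2) for v in vif(formative)])  # values near 1 = little collinearity
```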
Structural Equation Modeling Components and Data Analysis Using PLS

The first step in studies involving structural equation models (SEMs) is to explicitly specify both the structural (path) model and the construct-to-measures relationships in the measurement model. In the standard notation for specifying SEM models, an exogenous construct (an independent variable) is specified as ξ and is shown as predicting or ‘causing’ an endogenous construct (a dependent variable), which is specified as η. Exogenous constructs are labelled ξ1, ξ2, ξ3, etc., while endogenous constructs are labelled η1, η2, etc. In the first stage, the researcher needs to ensure that the items used as measures of the underlying constructs are both reliable and valid. The measurement model consists of the relationships between the observed variables (items) and the constructs that they measure. With traditional first generation techniques, Cronbach’s alpha coefficients and/or factor analysis are used. In PLS, the loadings of the measures on their corresponding construct are examined. The characteristics of the measurement model demonstrate the construct validity of the research instruments, i.e., the extent to which the operationalization of a construct actually measures what it purports to measure (Peter, 1981). Two important dimensions of construct validity are (a) convergent validity, including reliability, and (b) discriminant validity.
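For reflective blocks, convergent validity is commonly summarized through the indicator loadings, the composite reliability of the block, and the average variance extracted (AVE) in the Fornell and Larcker (1981) tradition. The short sketch below applies these standard formulas to a set of hypothetical standardized loadings; it assumes the loadings have already been obtained from a PLS (or factor) analysis and is not tied to the output format of any particular software package.

```python
import numpy as np

# Hypothetical standardized loadings for one reflective construct.
loadings = np.array([0.82, 0.78, 0.75, 0.70])

# Average variance extracted: the mean squared loading of the block.
ave = np.mean(loadings ** 2)

# Composite reliability: (sum of loadings)^2 divided by itself plus the sum of
# the indicator error variances, taking each error variance as 1 - loading^2.
error_variance = 1 - loadings ** 2
composite_reliability = loadings.sum() ** 2 / (loadings.sum() ** 2 + error_variance.sum())

print(f"AVE = {ave:.2f}")                                      # values above .50 are usually sought
print(f"Composite reliability = {composite_reliability:.2f}")  # values above .70 are usually sought
```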
Once convinced of the adequacy of the measurement model, the researcher can then proceed to interpret the resulting model coefficients. The structural equations represent the paths among the constructs, and the measurement equations represent the relationships between the indicators and the constructs that they measure. To assess the structural model, PLS produces standardized regression coefficients using ordinary least squares to minimize the residual variance. This evaluation consists of an assessment of the explanatory power of the independent variables, and an examination of the size and significance of the path coefficients, which represent hypotheses to be tested. Together the measurement and structural models form a network of measures and constructs (Fornell, 1982, 1987; Fornell & Bookstein, 1982). The item weights and loadings indicate the strength of the measures, while the estimated path coefficients indicate the strength and sign of the theoretical relationships. Since PLS makes no distributional assumptions in its parameter estimation procedure, traditional parameter-based techniques for significance testing and model evaluation are considered to be inappropriate (Chin, 1998). The evaluation of PLS models is therefore based on prediction-oriented measures that are nonparametric (Chin, 1998). The PLS measurement (outer) model for reflective measures is evaluated by examining the convergent and discriminant validity of the indicators, and the composite reliability of a block of indicators. On the other hand, the formative measures are evaluated on the basis of their substantive content, by comparing the relative size of their estimated weights, and by examining the statistical significance of the measure weights (Chin, 1998). Wold (1982) argued that, rather than being based on covariance fit, the evaluation of PLS models should apply prediction-oriented measures that are also nonparametric. Consistent with the distribution-free, predictive approach of PLS (Wold, 1980, 1982), the structural (inner) model is evaluated by assessing the percentage of variance explained,
that is, the R-square for the dependent latent constructs, by using the Stone-Geisser Q-square test for predictive relevance (Geisser, 1975; Stone, 1974), and by examining the size of the structural path coefficients. The stability of the estimates is examined by using the t-statistics obtained from the bootstrap resampling procedure (100 resamples) in the PLS software (Efron & Gong, 1983; Wold, 1982). These statistical techniques are now briefly discussed. LISREL and other covariance structure analysis modeling approaches involve parameter estimation procedures which seek to reproduce as closely as possible the observed covariance matrix. In contrast, PLS has as its primary objective the minimization of error (or, equivalently, the maximization of variance explained) in all endogenous constructs. One consequence of this difference in objectives between LISREL and PLS is that no summary statistic or overall goodness-of-fit measure, such as the likelihood ratio chi-square statistic, exists for models using the latter. Instead, the variance explained (the R-square value) and the sign and significance of path coefficients are used to assess nomological validity, i.e., how well each of the endogenous constructs is predicted (Cohen et al. 1990). R-square indicates the predictive power of the model and the values are interpreted in the same manner as R-square in a regression analysis. Moreover, the number of iterations required to converge on a solution provides an indication of how well the model fits the data (Hulland, 1999). In PLS, the structural (inner) model is also evaluated by using the Stone-Geisser Q-square test for predictive relevance (Geisser, 1975; Stone, 1974). The Q-square statistic represents a measure of how well the observed values are reconstructed by the model and its parameter estimates (Chin, 1998). The PLS adaptation of the predictive sample reuse technique as developed by Stone (1974) and Geisser (1975) follows a blindfolding procedure that omits a part of the data for a particular block of indicators during parameter estimation
and then attempts to estimate the omitted part using the estimated parameters. This procedure is repeated until every data point has been omitted and estimated. As a result of this procedure, a generalized cross-validation measure can be obtained (Chin, 1998). Finally, to assess the statistical significance of the estimated path coefficients using PLS, researchers can use a jackknife or bootstrap analysis to determine t-values for each path (Fornell & Barclay, 1993). With jackknifing, the researcher does not have to assume that the underlying data are multivariate normal. Jackknifing is a nonparametric technique that is robust in the sense that it is not as affected by violations of the usual assumptions of normality associated with regression analysis (Fornell & Barclay, 1993). The jackknifing procedure creates a series of subsamples, removing one or more cases from the total data set in each case (Lohmoller, 1984). PLS is then run using each of the subsamples to arrive at separate path estimates. The distribution of estimates is then examined, yielding both a standard error and a corresponding t-value. The bootstrap also represents a nonparametric approach for estimating the precision of the PLS estimates. N samples are created in order to obtain N estimates for each parameter in the PLS model. Each sample is obtained by sampling with replacement from the original data set (Chin, 1998). The bootstrapping approach treats a random sample of data as a substitute for the population and resamples from it a specified number of times to generate sample bootstrap estimates and standard errors. These sample bootstrap estimates and standard errors are averaged and used to obtain a confidence interval around the average of the bootstrap estimates (Schumacker & Lomax, 1996). This confidence interval is used to determine how stable or good the sample statistic is as an estimate of the population parameter. Obviously, if the random sample initially drawn from the population is not representative, then the sample statistic and corresponding bootstrap estimator
obtained from resampling will yield misleading results. The bootstrap approach is used in research situations where replication (in which additional samples are obtained) and cross-validation (in which the sample is split) are not practical (Barclay et al. 1995). The jackknife tends to take less time for standard error estimation, under the joint assumption that the bootstrap procedure utilizes a confidence estimation procedure other than the normal approximation and that the number of bootstrap resamples is larger than the number of jackknife subsamples. Conversely, the jackknife is viewed as less efficient than the bootstrap (Efron & Tibshirani, 1993).
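A minimal sketch of the bootstrap logic described above is given below in Python, using simulated construct scores and 500 resamples rather than the 100 used by the PLS software mentioned earlier. It resamples cases with replacement, re-estimates a single standardized path coefficient by ordinary least squares in each resample, and uses the spread of the bootstrap estimates to form a standard error and a t-value; the data and coefficient values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 150

# Simulated standardized construct scores: eta depends on xi (illustrative only).
xi = rng.standard_normal(n)
eta = 0.4 * xi + 0.9 * rng.standard_normal(n)

def path_coefficient(x, y):
    """Standardized OLS slope of y on x."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(x @ y / (x @ x))

estimate = path_coefficient(xi, eta)

# Bootstrap: resample cases with replacement and re-estimate the path each time.
n_resamples = 500
boot = np.empty(n_resamples)
for b in range(n_resamples):
    idx = rng.integers(0, n, size=n)
    boot[b] = path_coefficient(xi[idx], eta[idx])

se = boot.std(ddof=1)              # bootstrap standard error of the path estimate
t_value = estimate / se
print(f"path = {estimate:.3f}, bootstrap SE = {se:.3f}, t = {t_value:.2f}")
```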
Summary of the PLS Methodology

As an SEM technique, PLS offers greater flexibility in testing theoretical models with empirical data, since it allows researchers to handle latent constructs, model relationships among several latent predictor constructs, and incorporate errors in measurement. Because of this flexibility, PLS provides a powerful way to understand the interaction between theory and data. The technique thus provides a better platform than traditional multivariate techniques from which to construct and verify theory. PLS is more appropriate than LISREL when models are complex, when the goal of the research is explaining variance, when measures are not well established, and when the data may come from any distribution. PLS is also ideally suited to the early stages of theory building and testing, and can be used to suggest where relationships might or might not exist and to suggest propositions for later testing. PLS is particularly applicable in research areas where theory is not as well developed as that demanded by covariance based approaches such as LISREL. Other strengths that make PLS attractive include its ability to handle formative constructs and its small sample requirements. At its core, PLS combines principal components analysis and path analysis to simultaneously estimate the parameters of a
causal model. Because the analysis is partitioned, sample size is less important in the overall model. The only requirement is that the sample size be larger than the number of manifest variables in the largest block. Covariance based approaches such as LISREL require large sample sizes (e.g., 200 to 400 respondents), multivariate normal data, and a strong theoretical base for consistent estimators. The main problem lies with the required theoretical basis and the use of formative indicators in the measurement model. PLS users do not have to contend with the improper or inadmissible solutions and the problems with assessing model fit sometimes encountered with LISREL. The following appendix illustrates the step-by-step procedures for using PLS-Graph version 3 (developed by Wynne W. Chin and Tim Frye). It is strongly recommended that the reader first consult the PLS-Graph User’s Guide, which details the process of model creation and the generation of measurement and structural model statistical output. The user guide can be found at: http://www.pubinfo.vcu.edu/carma/Documents/OCT1405/PLSGRAPH3.0Manual.hubona.pdf.
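Before turning to that appendix, the "principal components analysis plus path analysis" core of the algorithm can be seen in miniature. The following is a deliberately simplified Python sketch of the iterative PLS estimation for one exogenous and one endogenous construct, each measured reflectively (mode A), on simulated data; real packages such as PLS-Graph or SmartPLS handle many blocks, alternative inner weighting schemes, and formative measurement, so nothing here should be read as the exact algorithm of any particular program.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300

# Simulated reflective indicators for two constructs (three items each).
xi_true = rng.standard_normal(n)
eta_true = 0.6 * xi_true + 0.8 * rng.standard_normal(n)
X = np.column_stack([0.8 * xi_true + 0.6 * rng.standard_normal(n) for _ in range(3)])
Y = np.column_stack([0.8 * eta_true + 0.6 * rng.standard_normal(n) for _ in range(3)])

def standardize(M):
    return (M - M.mean(axis=0)) / M.std(axis=0)

def lv_score(block, weights):
    score = block @ weights
    return (score - score.mean()) / score.std()      # keep construct scores standardized

X, Y = standardize(X), standardize(Y)

# Alternate between inner and outer estimation until the outer weights settle.
w_x, w_y = np.ones(X.shape[1]), np.ones(Y.shape[1])
for _ in range(100):
    lv_x, lv_y = lv_score(X, w_x), lv_score(Y, w_y)
    # Inner estimation: with only two blocks, each construct's inner proxy is
    # simply the score of the adjacent construct.
    proxy_x, proxy_y = lv_y, lv_x
    # Outer estimation (mode A): weights proportional to indicator-proxy covariances.
    new_w_x, new_w_y = X.T @ proxy_x / n, Y.T @ proxy_y / n
    converged = max(np.abs(new_w_x - w_x).max(), np.abs(new_w_y - w_y).max()) < 1e-7
    w_x, w_y = new_w_x, new_w_y
    if converged:
        break

lv_x, lv_y = lv_score(X, w_x), lv_score(Y, w_y)
path = float(lv_x @ lv_y / n)            # standardized path coefficient (single predictor)
loadings_x = (X.T @ lv_x / n).round(2)   # correlations of indicators with their own construct
loadings_y = (Y.T @ lv_y / n).round(2)
print(f"path xi -> eta: {path:.2f}")
print("loadings:", loadings_x, loadings_y)
```

In a full PLS analysis the weights, loadings and the structural path would then be assessed for reliability and significance using the resampling procedures described in the previous section.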
REFERENCES

Ackermann, R. J. (1985). Data, instruments and theory: A dialectical approach to understanding science. Princeton, NJ: Princeton University Press.
Adams, D. A., Nelson, R. R., & Todd, P. A. (1992). Perceived usefulness, ease of use, and usage of information technology: A replication. Management Information Systems Quarterly, 16(2), 227–247. doi:10.2307/249577
Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103(3), 411–423. doi:10.1037/0033-2909.103.3.411
Ashill, N., & Jobber, D. (2009). Measuring state, effect and response uncertainty: Theoretical construct development and empirical validation. Journal of Management. Retrieved from http://jom.sagepub.com/cgi/content/abstract/0149206308329968v1
Bagozzi, R. P. (1984). A prospectus for theory construction in marketing. Journal of Marketing, 48(1), 11–29. doi:10.2307/1251307
Bagozzi, R. P., & Baumgartner, H. (1994). The evaluation of structural equation models and hypothesis testing. In Bagozzi, R. P. (Ed.), Principles of marketing research (pp. 386–422). Cambridge, MA: Blackwell.
Barclay, D., Higgins, C., & Thompson, R. (1995). The partial least squares (PLS) approach to causal modeling: Personal computer adoption and use as an illustration (with commentaries). Technology Studies, 2(2), 285–324.
Bentler, P. M. (1995). EQS structural equations program manual. Encino, CA: Multivariate Software.
Blili, S., Raymond, L., & Rivard, S. (1998). Impact of task uncertainty, end-user involvement and competence on the success of end-user computing. Information & Management, 33(3), 137–153. doi:10.1016/S0378-7206(97)00043-8
Bollen, K. A. (1984). Multiple indicators: Internal consistency or no necessary relationship? Quality & Quantity, 18(4), 377–385. doi:10.1007/BF00227593
Bollen, K. A. (1989). Structural equations with latent variables. New York, NY: John Wiley and Sons.
Bollen, K. A., & Lennox, R. (1991). Conventional wisdom on measurement: A structural equation perspective. Psychological Bulletin, 110(2), 305–314. doi:10.1037/0033-2909.110.2.305
Bollen, K. A., & Long, J. S. (1993). Testing structural equation models. Newbury Park, CA: Sage Publications.
Byrne, B. (1994). Structural equation modeling with EQS and EQS/Windows. Thousand Oaks, CA: Sage Publications.
Cassel, C. M., Hackl, P., & Westlund, A. H. (2000). On measurement of intangible assets: A study of robustness of partial least squares. Total Quality Management, 11(7), S897–S907. doi:10.1080/09544120050135443
Chau, P. Y. K. (1997). Re-examining a model for evaluating information center success using a structural equation modeling approach. Decision Sciences, 28(2), 309–334. doi:10.1111/j.1540-5915.1997.tb01313.x
Chin, W. W. (1995). Open peer commentary on Barclay, D., Higgins, C. & Thompson, R. The partial least squares (PLS) approach to causal modeling: Personal computer adoption and use as an illustration. Technology Studies, 2(2), 310–319.
Chin, W. W. (1998). The partial least squares approach for structural equation modeling. In Marcoulides, G. A. (Ed.), Modern methods for business research (pp. 297–335). Mahwah, NJ: Lawrence Erlbaum Associates.
Chin, W. W., & Gopal, A. (1995). Adoption intention in GSS: Relative importance of beliefs. The Data Base for Advances in Information Systems, 26(2/3), 42–64.
Chin, W. W., & Todd, P. A. (1995). On the use, usefulness and ease of use of structural equation modeling in MIS research: A note of caution. Management Information Systems Quarterly, 19(2), 237–246. doi:10.2307/249690
Churchill, G. A., Jr. (1999). Marketing research: Methodological foundations (7th ed.). Orlando, FL: The Dryden Press.
Cohen, P., Cohen, J., Teresi, J., Marchi, M., & Velez, C. N. (1990). Problems in the measurement of latent variables in structural equations causal models. Applied Psychological Measurement, 14(2), 183–196. doi:10.1177/014662169001400207
Compeau, D. R., & Higgins, C. A. (1995). Application of social cognitive theory to training for computer skills. Information Systems Research, 6(2), 118–143. doi:10.1287/isre.6.2.118
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003. doi:10.1287/mnsc.35.8.982
DeVellis, R. F. (1991). Scale development: Theory and applications. Applied Research Methods Series (Vol. 16). Sage Publications.
Diamantopoulos, A. (1999). Export performance measurement: Reflective versus formative indicators. International Marketing Review, 16(6), 444–457. doi:10.1108/02651339910300422
Diamantopoulos, A., & Winklhofer, H. (2001). Index construction with formative indicators: An alternative to scale development. JMR, Journal of Marketing Research, 38(2), 269–277. doi:10.1509/jmkr.38.2.269.18845
Edwards, J. R. (2001). Multidimensional constructs in organizational behaviour research: An integrative analytical framework. Organizational Research Methods, 4(2), 144–192. doi:10.1177/109442810142004
Efron, B., & Tibshirani, R. J. (1993). An introduction to the bootstrap. Monographs on Statistics and Applied Probability. New York, NY: Chapman and Hall.
Esposito Vinzi, V., Chin, W., Henseler, J., & Wang, H. (2010). Handbook of partial least squares. Heidelberg, Germany: Springer. doi:10.1007/978-3-540-32827-8
Falk, R. F., & Miller, N. B. (1992). A primer for soft modeling. Akron, OH: University of Akron Press.
Fornell, C., Tellis, G., & Zinkhan, G. (1982). Validity assessment: A structural equation approach using partial least squares. In Walker, B. (Eds.), An assessment of marketing thought and practice (pp. 405–409). Chicago, IL: American Marketing Association.
Fornell, C. R. (1982). A second generation of multivariate analysis (Vol. 1). New York, NY: Praeger.
Fornell, C. R. (1987). A second generation of multivariate analysis: Classification of methods and implications for marketing research. In Houston, M. J. (Ed.), Review of marketing (pp. 407–450). Chicago, IL: American Marketing Association.
Fornell, C. R., & Barclay, D. (1993). Jackknifing: A supplement to Lohmoller’s LVPLS program. Ann Arbor, MI: University of Michigan Press.
Fornell, C. R., & Bookstein, F. L. (1982). Two structural equation models: LISREL and PLS applied to consumer exit-voice theory. JMR, Journal of Marketing Research, 19(4), 440–452. doi:10.2307/3151718
Fornell, C. R., & Cha, J. (1994). Partial least squares. In Bagozzi, R. P. (Ed.), Advanced methods of marketing research (pp. 52–78). Cambridge, MA: Blackwell.
Fornell, C. R., & Larcker, D. F. (1981). Structural equation models with unobservable variables and measurement error. JMR, Journal of Marketing Research, 18(3), 382–388. doi:10.2307/3150980
Fornell, C. R., Lorange, P., & Roos, J. (1990). The cooperative venture formation process: A latent variable structural modeling approach. Management Science, 36(10), 1246–1255. doi:10.1287/mnsc.36.10.1246
Fornell, C. R., & Yi, Y. (1992). Assumptions of the two-step approach to latent variable modeling. Sociological Methods & Research, 20(3), 291–319. doi:10.1177/0049124192020003001
Hulland, J. S. (1995). Market orientation and market learning systems: An environment-strategyperformance perspective. (Working Paper Series No. 95-09), The University of Western Ontario.
Geisser, S. (1975). The predictive sample reuse method with applications. Journal of the American Statistical Association, 70(350), 320–328. doi:10.2307/2285815
Hulland, J. S. (1999). Use of partial least squares (PLS) in strategic management research: A review of four recent studies. Strategic Management Journal, 20(2), 195–204. doi:10.1002/(SICI)1097-0266(199902)20:2<195::AID-SMJ13>3.0.CO;2-7
Hair, J., Black, W., Babin, B., Anderson, R., & Tatham, R. (2006). Multivariate data analysis. Upper Saddle River, NJ: Pearson Prentice Hall.
Hansmann, K. W., & Ringle, C. M. (2004). Smart PLS manual. Förderverein Industrielles Management an der Universität Hamburg e.V.
Hayduk, L. A. (1987). Structural equation modeling with LISREL. Baltimore, MD: Johns Hopkins University Press.
Hayduk, L. A. (1996). LISREL issues, debates and strategies. Baltimore, MD: Johns Hopkins University Press.
Hendrickson, A., Massey, P. D., & Cronan, T. P. (1993). On the test-retest reliability of perceived usefulness and perceived ease of use scales. Management Information Systems Quarterly, 17(2), 227–230. doi:10.2307/249803
Herting, J. R. (1985). Multiple indicator models using LISREL. In Blalock, H. M. (Ed.), Causal models in the social sciences (pp. 263–319). New York, NY: Aldine.
Hirschheim, R. (1985). Information systems epistemology: An historical perspective. In Mumford, E., Hirschheim, R., & Fitzgerald, R. (Eds.), Research methods in Information Systems (pp. 13–18). Amsterdam, The Netherlands: North-Holland.
Howell, J. M., & Higgins, C. A. (1990). Champions of technological innovations. Administrative Science Quarterly, 35(2), 317–341. doi:10.2307/2393393
Hulland, J. S., Cho, Y. H., & Lam, S. (1996). Use of causal models in marketing research: A review. International Journal of Research in Marketing, 13(2), 181–197. doi:10.1016/0167-8116(96)00002-X
Hulland, J. S., & Kleinmuntz, D. N. (1994). Factors influencing the use of internal summary evaluations versus external information in choice. Journal of Behavioral Decision Making, 7(2), 79–102. doi:10.1002/bdm.3960070202
Johnson, M. D., & Fornell, C. (1987). The nature and methodological implications of the cognitive representation of products. The Journal of Consumer Research, 14(September), 214–228. doi:10.1086/209107
Jöreskog, K. G., & Sörbom, D. (1982). Recent developments in structural equation modeling. Journal of Marketing Research, 19(4), 404–416. doi:10.2307/3151714
Jöreskog, K. G., & Sörbom, D. (1988). LISREL 7: A guide to the program and applications. Chicago, IL: SPSS Inc.
Jöreskog, K. G., & Wold, H. (1982). The ML and PLS techniques for modeling with latent variables: Historical and comparative aspects. In Jöreskog, K. G., & Wold, H. (Eds.), Systems under indirect observation: Causality, structure, prediction (Vol. 1, pp. 263–270). Amsterdam, The Netherlands: North Holland.
Kenny, D. A. (1979). Correlation and causality. New York, NY: Wiley.
Spector, P. E. (1992). Summated rating scale construction. Newbury Park, CA: Sage Publications.
Lee, D. Y. (2007). The impact of poor performance on risk-taking attitudes: A longitudinal study with a PLS causal modeling approach. Decision Sciences, 28(1), 59–80. doi:10.1111/j.1540-5915.1997. tb01302.x
Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society. Series A (General), 36(2), 111–133.
Lohmoller, J. B. (1989). Latent variable path modeling with partial least squares. Heidelberg, Germany: Physica-Verlag.
Lohmoller, J. B. (1982). An overview of latent variables path analysis. Paper presented at the Annual Meeting of the American Educational Research Association, New York.
MacCallum, R. C., & Browne, M. W. (1993). The use of causal indicators in covariance structure models: Some practical issues. Psychological Bulletin, 114(3), 533–541. doi:10.1037/0033-2909.114.3.533
Noonan, R. B. (1979). PLS path modelling with latent variables: Analysing school survey data using partial least squares. Stockholm: Institute of International Education, University of Stockholm.
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.
Pedhazur, E. J. (1982). Multiple regression in behavioral research (2nd ed.). New York, NY: Holt, Rinehart and Winston.
Rivard, S., & Huff, S. L. (1988). Factors of success for end-user computing. Communications of the ACM, 31(5), 552–561. doi:10.1145/42411.42418
Schumacker, R., & Lomax, R. (2004). A beginner's guide to structural equation modeling. Mahwah, NJ: Lawrence Erlbaum Associates.
Smith, J. B., & Barclay, D. W. (1997). The effects of organizational differences and trust on the effectiveness of selling partner relationships. Journal of Marketing, 61(1), 3–21. doi:10.2307/1252186
Thompson, R., Higgins, C., & Howell, J. (1994). Influence of experience on personal computer utilization: Testing a conceptual model. Journal of Management Information Systems, 11(1), 167–187.
Wold, H. (1980). Model construction and evaluation when theoretical knowledge is scarce: Theory and application of partial least squares. In Kmenta, J., & Ramsey, J. B. (Eds.), Evaluation of econometric models (pp. 47–74). New York, NY: Academic Press.
Wold, H. (1982). Soft modeling: The basic design and some extensions. In Jöreskog, K. G., & Wold, H. (Eds.), Systems under indirect observation (pp. 1–54). Amsterdam, The Netherlands: North-Holland.
Wold, H. (1985). Systems analysis by partial least squares. In Nijkamp, P., Leitner, L., & Wrigley, N. (Eds.), Measuring the unmeasurable (pp. 221–251). Dordrecht, The Netherlands: Martinus Nijhoff.
Wold, H. (1989). Introduction to the second generation of multivariate analysis. In Wold, H. (Ed.), Theoretical empiricism (pp. 7–11). New York, NY: Paragon House.
ADDITIONAL READING

Byrne, B. (1994). Structural equation modeling with EQS and EQS/Windows. Thousand Oaks: Sage Publications.
Diamantopoulos, A., & Siguaw, J. (2000). Introducing LISREL. London: Sage Publications.
Dunn, G., Everitt, B., & Pickles, A. (1993). Modelling covariances and latent variables using EQS. Boca Raton: CRC Press.
Esposito Vinzi, V., Chin, W., Henseler, J., & Wang, H. (2010). Handbook of partial least squares. Heidelberg: Springer. doi:10.1007/978-3-540-32827-8
Hair, J., Black, W., Babin, B., Anderson, R., & Tatham, R. (2006). Multivariate data analysis. Upper Saddle River, NJ: Pearson Prentice Hall.
Kline, R. (2005). Principles and practice of structural equation modeling. New York: The Guilford Press.
Raykov, T., & Marcoulides, G. (2006). A first course in structural equation modeling. Mahwah, NJ: Lawrence Erlbaum Associates.
Schumacker, R., & Lomax, R. (2004). A beginner's guide to structural equation modeling. Mahwah, NJ: Lawrence Erlbaum Associates.
SEM SOFTWARE

1. Covariance-based approaches
AMOS - http://www.spss.com/amos/
EQS - http://www.mvsoft.com/
LISREL - http://www.ssicentral.com/lisrel/
2. Variance-based approaches
PLS-Graph - http://www.plsgraph.com/
SmartPLS - http://www.smartpls.de/forum/
XLSTAT-PLS - http://www.xlstat.com/en/products/xlstat-pls/
KEY TERMS AND DEFINITIONS

Structural Equation Modeling (SEM): A method for representing, estimating, and testing a theoretical network of (mostly) linear relations between variables.
Partial Least Squares (PLS): A powerful multivariate analysis technique with roots in path analysis.
Covariance-Based SEM: Techniques that estimate path coefficients and loadings by minimizing the difference between the observed and predicted variance-covariance matrices.
Soft Modeling: Applies when theoretical knowledge is scarce and stringent distribution assumptions are not applicable. Soft modeling can be viewed as a method of estimating the likelihood of an event given information about other events.
Multicollinearity: The extent to which a single independent variable is highly correlated with a set of other independent variables. As multicollinearity increases, it becomes more problematic to parcel out the effect of any single construct owing to their interrelationships.
Formative Indicators: A formative construct is one where the observed variables are assumed to cause a latent variable (the construct is expressed as a function of the observed variables).
Reflective Indicators: A reflective construct is one where the observed variables are expressed as a function of the construct (the observed variables are assumed to be caused by the latent variable).
Bootstrap and Jackknife: Approaches to validating a theoretical model by drawing a large number of subsamples and estimating models for each subsample.
ENDNOTES

1. LISREL requires the use of large sample sizes to ensure correct estimates of the unknown parameters and their standard errors. Although there is no widely cited minimum, numerous authors refer to a minimum requirement of 200 cases (Hayduk, 1987; Jöreskog & Sörbom, 1996).
2. An exogenous construct is an independent variable and is shown as predicting or 'causing' an endogenous construct, a dependent variable (Hair et al., 2006).
3. When pointing the arrows from the circle to the squares, the block becomes outwardly directed. This means that the circle is estimated in a fashion similar to that of a first principal component, i.e., factor loadings are identified that represent the predictable, common variance among the manifest variables. When the arrows are pointed from the squares to the circle, the block is inner-directed. In this case the circle is estimated as a regressed variate and factor weights are identified.
Chapter 7
Using Experimental Research to Investigate Students’ Satisfaction with Online Learning Art W. Bangert Montana State University, USA
ABSTRACT

The use of experimental research for investigating the effectiveness of technology-supported instructional innovations in K-12 and higher education settings is fairly limited. The implementation of the No Child Left Behind Act (NCLB) of 2001 has renewed the emphasis on the use of experimental research for establishing evidence to support the effectiveness of instructional interventions and other school-based programs in K-12 and higher education contexts. This chapter discusses the most common experimental designs and the threats to internal validity of experimental procedures that must be controlled to ensure that the interventions or programs under investigation are responsible for changes in the dependent variables of interest. A study by Bangert (2008) is used to illustrate procedures for conducting experimental research, controlling potential threats to internal validity, and reporting results that communicate both practical and statistical significance.
DOI: 10.4018/978-1-60960-615-2.ch007

INTRODUCTION

The first empirical studies undertaken to investigate factors related to student satisfaction with online learning environments were primarily qualitative in nature (Garrison & Arbaugh, 2007). Transcripts from computer-mediated conferencing were coded and themes were identified to describe the variables related to student satisfaction and their online learning experiences (e.g., Gunawardena, Lowe, & Anderson, 1997). However, the enormous growth in students enrolling in online courses
over the past five years has created greater opportunities for researchers to conduct large-scale quantitative studies that offer more generalizable outcomes about online student satisfaction and learning. Results from empirically based quantitative research designed to investigate factors influencing student satisfaction with web-based learning contexts have been regularly reported in the literature (e.g., Arbaugh, 2008; Dziuban, Moskal, Brophy, & Shea, 2007). Although these quantitative studies are considered empirical in nature, they use correlational research methods that provide results that cannot be interpreted to verify cause and effect relationships. However, outcomes from correlational studies are important for identifying relationships that can be further investigated by the use of experiments to establish causal relationships. A renewed emphasis on the use of experimental research outcomes to support the efficacy of K-12 instructional interventions and other school-based programs emerged from funding requirements specified by the No Child Left Behind Act (NCLB) of 2001 (U.S. Congress, 2001). Institutions are now required to evaluate the effectiveness of funded programs through the use of scientifically based evidence collected from experimental research studies (Feuer, Towne, & Shavelson, 2002). For example, the U.S. Department of Education's Institute of Education Sciences (IES) (2008) clearly specifies in its "What Works Clearinghouse: Procedures and Standards Handbook" that grant proposals must incorporate the use of randomized experiments to gain the highest ratings toward providing "strong evidence" of intervention or program effectiveness. The National Research Council (NRC) has for some time suggested that, as in the field of medicine, randomized field trials be used to evaluate the efficacy of intervention-based programs that are funded by the No Child Left Behind Act and other federal programs (NRC, 2002). The importance of using experimental research to evaluate program outcomes has greatly influenced the methods that
higher education faculty use in collaboration with K-12 schools to test the effectiveness of instructional interventions and school-based programs funded by the U.S. Department of Education. Although experimental methods have been commonly used to conduct research in higher education settings, their use for investigating technology-supported instructional innovations has been fairly limited (Ross, Morrison & Lowther, 2005). Experimental research has advantages over descriptive or correlational studies because of the controls used to reduce the influence of extraneous variables on the dependent variables used to measure program outcomes. The rigor of experimental research designs is dependent on the procedures used to minimize threats to internal validity. Cook and Campbell (1979) have defined internal validity as "the approximate validity with which we can infer that a relationship is causal" (p. 37). That is, can we conclude that the intervention or independent variable caused a change or had an effect on a measurement or dependent variable of interest? For example, an experimental study could be designed to demonstrate improved student satisfaction and learning during online course discussions that incorporate frequent instructor interactions when compared with minimal instructor interactions. If the internal validity for this experiment is sound, then significant increases in student satisfaction and learning for online courses that incorporate frequent instructor interactions are interpreted as being "caused" by instructor interactions rather than other factors such as course design, student interactions, or the type of instructor-student exchange. Experimental designs are classified according to the varying levels of control procedures they use to ensure internal validity. Although there are variations in the manner in which experimental designs are implemented, the most common configurations are discussed in the next section.
AN OVERVIEW OF EXPERIMENTAL RESEARCH DESIGNS

True Experiments

True experiments are considered the most rigorous classification of all experimental designs. Their strength for inferring causation is based on the use of randomization to reduce systematic error when assigning participants to comparison groups. According to Feuer, Towne, and Shavelson (2002), "when well-specified causal hypotheses can be formulated and randomization to treatment and control conditions is ethical and feasible, a randomized experiment is the best method for estimating effects" (p. 8). The previous example describing an experiment to test the effects of instructor interactions on student satisfaction and learning would be considered a true experimental design if online students are randomly assigned to comparison groups. In this case, both groups are then subjected to identical course conditions but different treatment conditions (frequent instructor interactions vs. minimal instructor interactions during discussions). Of course, this example could be extended to compare more than two groups if another level of the independent variable, such as "no instructor interactions," was added. It should be noted that random selection and random assignment are two different procedures designed to minimize experimental error. Random selection is a technique used to randomly select a sample to help ensure that participants represent a larger population. Random assignment to groups, on the other hand, is a procedure designed to establish equality among groups on variables that may influence research outcomes. Both procedures are undertaken to eliminate the influence of bias caused by variables outside of the experiment on a dependent variable. Efforts to eliminate the influence of extraneous variables that negatively bias results are important for validly interpreting the effects of the independent variable under investigation.
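As a brief illustration of random assignment, the minimal Python sketch below shuffles a hypothetical roster and deals participants into three conditions; the participant labels, condition names, and seed are invented for illustration and are not drawn from any study discussed in this chapter.

```python
import random

# Hypothetical participant labels; a real study would use its own roster.
participants = [f"student_{i:02d}" for i in range(1, 34)]
conditions = ["control", "frequent_interaction", "minimal_interaction"]

random.seed(42)                 # fixed seed so the example is reproducible
random.shuffle(participants)    # random order removes systematic assignment bias

# Deal the shuffled participants to the three conditions in round-robin order.
groups = {c: participants[i::len(conditions)] for i, c in enumerate(conditions)}

for condition, members in groups.items():
    print(condition, len(members))
```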
Repeated Measures

Within-subject designs are conceptually the opposite of between-group designs in that each participant in the research receives the same treatment. A common example is the administration of a pretest and posttest to determine if an intervention has caused a change in some type of skill or behavior. Repeated measures designs are appealing because the use of one experimental group reduces the number of required participants. In addition, error due to individual variability that occurs for two independent group comparisons is reduced because in a repeated measures design each participant serves as his or her own control. The disadvantage is that observations are not independent, and carry-over effects or pretest sensitization may influence the effects of the treatment on the dependent variable. An example to illustrate the use of the repeated measures design could be to test the effects of using Web 2.0 applications on student satisfaction. The instructor might decide to use Google Docs, Webspiration and Adobe Buzzword as instructional strategies and assess student satisfaction before and after their use to determine which Web 2.0 application students favored most.
Quasi-Experiments

Quasi-experiments are used in educational settings when it is not feasible or advisable to randomly assign participants to different experimental groups. Although the strength of evidence used to evaluate the effects of an independent variable on a dependent variable is reduced, quasi-experiments can still provide evidence for causality if the approach is strong (Gliner & Morgan, 2000). The strongest type of quasi-experiment is one that randomly assigns classrooms or intact groups to experimental conditions. Testing the effects of the frequency of online instructor interactions on student satisfaction would be considered a quasi-experiment if the instructor randomly assigned the "frequent interactions" and "minimal interaction"
treatments to different intact, online classrooms. The advantage of quasi-experiments is that they avoid the logistical difficulties inherent in attempting to randomly assign students to treatment groups. However, internal validity is reduced with the use of quasi-experiments because there is more chance that prior differences exist between intact groups on variables such as attitudes toward online courses, technology or course content.
Causal Comparative

Causal comparative research differs from true experiments and quasi-experimental designs because the researcher cannot randomly assign participants to groups and the independent or treatment variable is not manipulated. The independent variable in this case is a behavior or characteristic thought to influence some other behavior or characteristic that differentiates existing groups. A dependent variable is then selected to measure the behavior or characteristic hypothesized to be responsible for causing existing group differences. For example, Dobbs, Waide and Carmen (2009) used a causal comparative design to investigate the effect of online learning experience on students' perceptions of their online course delivery. Experience with online learning was defined as completing "at least one online course" as compared to "no online coursework". Dobbs et al. used results from self-administered surveys to compare the perceptions of 180 university students enrolled in either a face-to-face or online criminal justice course. Results from this study indicated that regardless of delivery mode, students with more online course experience rated their perceptions of the quality of online learning significantly higher than students who had less online experience. This example is considered a causal-comparative design because individuals were not randomly assigned to groups. The goal of this experiment was to determine if the independent variable, experience with online learning, had an effect on student perceptions of the quality of their learning experience. These results, however,
do not imply causation, but rather suggest that a group attribute ("experience" or "no experience with online learning") was associated with differences in students' perceptions of the quality of their online learning experiences. Causal relationships are also difficult to establish for Dobbs et al.'s outcomes because other preexisting group characteristics, such as instructor interactions, time of day, or interest in course content, may also have influenced students' perceptions of the quality of their online learning experiences unless controlled for.
THREATS TO THE INTERNAL VALIDITY OF EXPERIMENTAL STUDIES

Threats to internal validity are uncontrolled factors that cause error in research outcomes and prevent sound causal interpretations of the experimental results. Researchers need to be aware of the potential threats to internal validity when planning and conducting experiments designed to investigate factors hypothesized to have causal effects on student satisfaction and learning in online learning contexts. More importantly, researchers should understand the procedures they can use to reduce potential sources of experimental error that endanger the validity of their research outcomes. Eight major factors have been identified that have the potential to cause erroneous interpretations of outcomes from experiments (Campbell & Stanley, 1966; Cook & Campbell, 1979). These eight sources of error can be classified into two major categories based on the methods used to control their influence on experimental results. Gliner and Morgan (2000) suggest that threats to internal validity are best characterized by the experimental procedures used to control them: those addressed by ensuring (1) equivalence of groups on participant characteristics and those addressed by (2) control of extraneous (experience or environmental) variables.
INTERNAL VALIDITY AND GROUP EQUIVALENCE

Differential Selection of Participants

The manner in which participants are assigned to groups may contribute to invalid conclusions about the effect of an intervention. For example, randomly assigning intact groups to experimental conditions does nothing to eliminate preexisting group differences. This validity threat is most commonly associated with quasi-experimental and causal comparative designs that compare classrooms or groups of individuals defined by a unique characteristic to test the effectiveness of an intervention. A statistical technique recommended for controlling potential sources of error associated with experimental designs that use intact groups is Analysis of Covariance (ANCOVA). This statistical control uses data from existing sources, or data collected during the experiment, to measure one or more characteristics thought to influence the outcome variable other than the treatment. The ANCOVA analysis removes the influence of these purposely measured extraneous variables from the dependent variable used for group comparisons. Another method is to "block" or group individuals within treatment and control groups by variables that may have significant effects on a dependent variable. For example, when investigating students' perceptions of online coursework, online experience could be controlled for by dividing students into "high" (5 or more courses), "moderate" (2 to 4 courses) and "low" (1 course or less) experience groups. A factorial analysis of variance (ANOVA) could be conducted to determine if perceptions differ across all students or if differences just occur among certain sub-groups or "blocks" of students. Using matched samples (or matching) is an alternative control to blocking individuals by group characteristics. This method involves matching intact control and treatment groups on participant characteristics such as gender, age,
experience, ability, socioeconomic status, etc. In this case, matching is used to help verify that changes in a dependent variable are due to treatment effects and not the result of differences in preexisting group characteristics. Prior research studies can be used to guide researchers toward important participant characteristics that should be controlled for so that more valid conclusions can be drawn.
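A minimal sketch of the ANCOVA idea described above, written in Python with statsmodels; the data frame, column names, and scores are invented for illustration rather than taken from any study discussed here.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented pretest/posttest scores for two intact groups.
df = pd.DataFrame({
    "group":    ["control"] * 5 + ["treatment"] * 5,
    "pretest":  [2.1, 3.4, 2.8, 3.0, 2.5, 3.2, 2.9, 3.5, 2.7, 3.1],
    "posttest": [2.4, 3.6, 3.0, 3.1, 2.6, 3.9, 3.4, 4.1, 3.3, 3.7],
})

# Posttest scores are modeled from group membership plus the pretest covariate;
# the ANCOVA table reports the group effect adjusted for pretest differences.
model = smf.ols("posttest ~ C(group) + pretest", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```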
Statistical Regression

Sometimes the purpose of a study is to investigate interventions that may benefit a particular group who score substantially below average or above average on some outcome measure. For example, a researcher might be interested in researching the effects of using Web 2.0 collaboration tools to improve students' satisfaction with online coursework. The researcher could use a repeated measures design that involves a pretest and posttest to determine if students with the poorest attitudes toward online coursework evidenced significant improvement in their perceptions. However, the laws of probability dictate that students who score very low on pretest measures will, by chance alone, score higher on posttest measures. That is, students' scores will have a tendency to move or regress toward the mean. One method for reducing artificial changes due to statistical regression is to also use a control group that does not receive the Web 2.0 treatment. Both groups can be pretested and post-tested to compare the magnitude of change in their perceptions of online coursework, or the pretest can be used as a covariate to equalize preexisting group differences related to perceptions of online coursework using an ANCOVA analysis.
Experimental Mortality

Attrition or loss of participants between or among groups may bias experimental results. Participants in the treatment group may withdraw because of the perceived or real burden of participation.
Control group participants may grow weary of the requirements to complete assessments that provide comparison data. Researchers can control for mortality by oversampling participants to ensure they have the required numbers for adequate statistical power and to avoid biased outcomes. Mortality can create conditions similar to those encountered with biased sampling.
Instrumentation

Several sources of error may occur when researchers use instruments in an inconsistent manner to collect data. The use of pretest and posttest measures that vary in difficulty, assess slightly different constructs, or are administered using unstandardized procedures introduces artificial gains or declines in measurement variables. The use of pilot testing to standardize test administration procedures is one method for reducing experimenter error and increasing the reliability of results that reflect changes in dependent variables when making between-group and within-group comparisons. The use of equated pretests and posttests is another method that can be used for controlling the error associated with the inconsistent measurement of dependent variables. Researchers who develop their own data collection procedures and instruments must rely on existing research, existing instruments, and theory to guide their efforts to accurately operationalize the constructs they intend to investigate using experimental research.
THREATS TO INTERNAL VALIDITY AND EXTRANEOUS VARIABLES

Environmental Events

Environmental events such as history, treatment diffusion and maturation are those threats that researchers may have less control over. History refers to events occurring during experiments that influence results other than the intended treatment
condition. For example, in the context of the Web 2.0 experiment, an outside environmental event that could adversely influence student perceptions of online coursework might be network problems causing a slow connection or a new link posted on an institution’s courseroom homepage highlighting the advantages of using Web 2.0 applications. Efforts to control for environmental events that might adversely influence experimental outcomes involve good methodological planning; however, control for all potential environmental threats is impossible.
Treatment Diffusion

Another external event that can cause researchers to draw erroneous conclusions from experimental studies is the issue of treatment diffusion. This threat occurs when students assigned to a control group are inadvertently exposed to the experimental group's treatment. Exposure of the control group to some or all of the treatment may equalize outcomes between groups, thus preventing an accurate representation of the robustness of treatment effects. The experiment designed to test the effects of Web 2.0 use on students' perceptions of online coursework might be adversely influenced when control group students are enrolled in different online courses where the instructor uses one or more web-based interactive tools to deliver instruction. Again, good methodological planning and the use of standardized procedures for establishing and maintaining experimental group conditions is the best way to control for treatment diffusion.
Maturation

Maturation, like history and treatment diffusion issues, is related to some outside influence that causes change in a result other than the treatment of interest. Unintended influences due to psychological rather than physical changes are more likely to influence experiments conducted in university
settings due to the ages and maturity levels of students. A developmental or maturational issue related to the Web 2.0 experiment might be that students in both groups will likely demonstrate increased learning over time, masking the effects of the Web 2.0 treatment. Successful learning experiences for online students might also inadvertently impact results because studies suggest that perceived learning is positively related to student satisfaction with web-based courses (Swan & Shih, 2005). In this case, student perceptions could be measured prior to the experiment and used as a covariate to control for preexisting group differences on a post-experiment measure. In addition, this pretest measure could also be used to form blocks of students with similar perceptions that could be compared using a factorial ANOVA to control for differences in perceptions that characterize groups prior to the experiment.
Testing Effects

Research situations that use a pretest-posttest experimental design risk introducing error in outcomes by alerting participants to posttest content or prompting them to behave in ways intended by the intervention. Test scores may change because of pretest sensitization and not because of the treatment. Possible carry-over effects from pretesting that have the potential to influence posttest results are clues about content and question format as well as socially accepted attitudes. The use of a posttest measure that contains items that are different from, yet equated to, the pretest is one method to reduce the chance that changes in learning or other behaviors are the result of familiarity with content due to pretest exposure.
Interactions with Participant Assignment

The risk of introducing experimental error can also occur when two competing threats to
internal validity interact with one another. Although participants are randomly assigned to groups, there may be unidentified group characteristics that differentiate them from one another before the intervention is applied to one or more treatment groups. For example, in the case of the Web 2.0 experiment described earlier, students in the control group, unbeknownst to the researcher, may have participated in Web 2.0 activities during their freshman orientation course. The fact that the control group students already had exposure to Web 2.0 applications might dilute the outcomes of the experiment. Random assignment does not always guarantee that systematic error is completely eliminated, suggesting that researchers should make efforts to collect information about group characteristics to determine their equivalence prior to proceeding with a study. For this situation, the use of blocking, ANCOVA, or a combination of both methods may help to control for this threat to internal validity.
STATISTICAL ANALYSIS OF EXPERIMENTAL DATA

Both parametric and nonparametric statistics can be used for analyzing group data to determine causal relationships. The use of parametric analyses is dependent on the group data used for comparisons meeting the assumptions of normality and homogeneity of variances. The most common parametric tests used for making group comparisons include independent samples t-tests and One Way Analysis of Variance (ANOVA). ANOVA designs are used for comparing results from more than two groups. One-group experimental designs commonly use the paired samples t-test for pretest and posttest comparisons and the one-way repeated measures ANOVA for within-group comparisons that involve more than two repeated measurements.
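For readers who want to see these comparisons in code, the following Python sketch runs the parametric tests named above with scipy; the satisfaction ratings are hypothetical and not taken from any study described in this chapter.

```python
from scipy import stats

# Hypothetical satisfaction ratings for three course conditions.
control   = [3.1, 3.4, 2.9, 3.6, 3.2, 3.0]
treatment = [3.8, 4.1, 3.6, 4.3, 3.9, 4.0]
third     = [3.5, 3.7, 3.3, 3.9, 3.6, 3.4]

# Two-group comparison: independent samples t-test.
t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.3f}")

# Three-group comparison: one-way ANOVA.
f, p = stats.f_oneway(control, treatment, third)
print(f"F = {f:.2f}, p = {p:.3f}")

# One-group pretest/posttest comparison: paired samples t-test.
pre, post = [2.8, 3.0, 3.2, 2.9, 3.1], [3.3, 3.4, 3.6, 3.1, 3.5]
t, p = stats.ttest_rel(post, pre)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```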
Table 1. Parametric and nonparametric statistical analyses for quantitative data

Parametric                          Nonparametric
Independent Samples t-test          Mann-Whitney U Test
Paired Samples t-test               Wilcoxon Signed-Ranks Test
One Way ANOVA                       Kruskal-Wallis Test
One Way Repeated Measures ANOVA     Friedman Test

Frequency Data
Binomial Test (two-group comparisons)
Chi Square Goodness of Fit Test (multi-group comparisons)
Nonparametric statistical procedures can be used for comparative analyses when the assumptions for parametric tests are not met. The advantage of these statistical tests is that they use ranked data to reduce error caused by skewed data and heterogeneous group variances. However, nonparametric tests are considered less powerful because ranked data are less sensitive for detecting changes. The nonparametric equivalent of the independent samples t-test is the Mann-Whitney U test, while the Kruskal-Wallis analysis is the corresponding nonparametric test for the One Way ANOVA. The Wilcoxon Signed-Ranks test is the nonparametric counterpart of the parametric paired samples t-test used for a repeated measures comparison. Discussion transcripts and other text-based data found in computer-mediated communications are often categorized into quantitative units for analysis using quantitative content analysis (QCA) (Rourke & Anderson, 2004). Online text from discussions, emails, and other sources is segmented into units (e.g., one message), each unit is classified according to a pre-defined category, and the frequencies for each category are recorded. Two-group comparisons of frequency data can be made using the binomial test, while frequency data from several groups can be compared using the Chi Square Goodness of Fit test (see Table 1).
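A companion sketch, again with invented data, shows the nonparametric counterparts from Table 1 in scipy (the binomtest function assumes scipy 1.7 or later).

```python
from scipy import stats

# Hypothetical ordinal ratings for three groups.
group_a = [3, 4, 2, 5, 3, 4]
group_b = [4, 5, 4, 5, 4, 3]
group_c = [2, 3, 2, 4, 3, 2]

u, p_u = stats.mannwhitneyu(group_a, group_b)        # two independent groups
h, p_kw = stats.kruskal(group_a, group_b, group_c)   # three or more groups

# Paired (repeated measures) ratings: Wilcoxon Signed-Ranks test.
pre, post = [3, 4, 2, 5, 3, 4], [4, 6, 3, 7, 4, 5]
w, p_w = stats.wilcoxon(pre, post)

# Frequency data: do 18 of 30 codable postings differ from a 50/50 split?
binom = stats.binomtest(18, n=30, p=0.5)

# Multi-group frequency comparison: Chi Square Goodness of Fit on counts.
chi2, p_chi = stats.chisquare([12, 9, 20, 15])

print(u, h, w, binom.pvalue, chi2)
```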
REPORTING STATISTICAL RESULTS

Statistical results for analyses that use parametric t-tests or ANOVAs to make group comparisons are reported using t or F statistics accompanied by the probability of rejecting the null hypothesis. Making the decision that a result is either significant or non-significant is based on the reported Type I error rate (or probability of significance). For example, considering the context of the Web 2.0 experiment described earlier, a researcher might compare treatment and control groups' average ratings of perceptions of their online coursework. Higher ratings for this study are associated with more positive perceptions of online coursework. A significant finding for group differences might be written as follows: "Online students using Web 2.0 applications for online instruction rated their perceptions of online coursework significantly higher than online students who did not use Web 2.0 applications as an instructional strategy, t(58) = 4.21, p = .03." For this result, the chance of making a Type I error, or a mistake due to sampling error, when deciding that there is a significant difference between group ratings would be 3% (p = .03). If the a priori alpha level was .05, or the chance of making a Type I error was set at 5%, then the researcher would make the decision to reject the null hypothesis and interpret the result as "students using Web 2.0 on average rated their perceptions of online courses more positively than those online
students who did not participate in Web 2.0 activities." It is important to emphasize that the chance of making a Type I error is related to the efforts that researchers undertake to control risks that threaten the validity of experimental outcomes. In the case of a three-group experiment, a one way ANOVA could be used to identify significant group differences. Let's say that the researcher expands the study to investigate the effects of more than two types of Web 2.0 applications on students' perceptions of online coursework. Three comparison groups could be formed with students assigned to the use of either Wikispaces, Google Docs or Adobe Buzzword to increase collaboration and understanding of course content. If the ANOVA analysis yields significant differences, results could be reported in the following manner: "Results from a One Way Analysis of Variance (ANOVA) found significant group differences on average ratings of online coursework among students enrolled in courses using different Web 2.0 applications, F(2, 88) = 6.54, p = .01." Although an F ratio is reported for an ANOVA analysis, the probability of rejecting the null hypothesis, or making the decision that the result is significant, is interpreted in the same manner as the significance level or probabilities for results of t-tests. This result indicates that there is only a 1% chance of making a Type I error when saying that the students' average ratings differ significantly by the type of Web 2.0 application used in their online courses. If the ANOVA results are significant, then post hoc tests are conducted to determine which Web 2.0 groups rated their perceptions of online courses significantly higher than the other Web 2.0 groups. Results from the post hoc tests could be reported as follows: "Post hoc follow-up tests found that students using Wikispaces rated their perceptions of online coursework significantly higher than students who used Google Docs or Adobe Buzzword for online instruction. However, significant differences were not found for
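The chapter does not name a specific post hoc procedure; the sketch below assumes Tukey's HSD, one common choice, and uses invented ratings for three hypothetical Web 2.0 groups.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented ratings for three hypothetical Web 2.0 groups.
ratings = np.array([4.1, 4.3, 3.9, 4.4, 4.0,    # Wikispaces
                    3.4, 3.6, 3.2, 3.5, 3.3,    # Google Docs
                    3.3, 3.5, 3.1, 3.6, 3.2])   # Adobe Buzzword
groups = ["Wikispaces"] * 5 + ["Google Docs"] * 5 + ["Adobe Buzzword"] * 5

# Each pairwise comparison is reported with a mean difference, an adjusted
# p-value, and a reject/fail-to-reject decision at the chosen alpha level.
print(pairwise_tukeyhsd(ratings, groups, alpha=0.05))
```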
the Google Docs and Adobe Buzzword groups when compared on average student ratings of online coursework." Quantitative Content Analysis (QCA) techniques have been used frequently by researchers to investigate factors that influence online learning contexts (e.g., Garrison, Anderson & Archer, 2001; Bangert, 2008). Most studies using QCA, however, have limited their results to descriptive statistics that report frequencies and percentages. The binomial test can be used for comparing two categories when data are reported in the form of proportions. The researcher investigating the effects of Web 2.0 use on students' perceptions of online coursework might extend the experiment to examine levels of reflective thought occurring during the use of these web-based tools. For example, the use of Google Docs and Adobe Buzzword could be compared using the binomial test to determine if one application promotes a significantly greater number of reflective postings than the other. A significant finding might be written as follows: "Students using Adobe Buzzword posted a significantly greater percentage of postings classified at the highest level of reflective thought than the students who used Google Docs, z = 2.44, p = .03." Although a different statistic is reported, the same inferences can be made based on the probability of significance (Type I error rate) as when interpreting results for a parametric analysis. While the binomial test is often used for comparing frequency data for two groups, the Chi Square Goodness of Fit test is used for comparing frequency data for more than two groups.
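A small sketch of a z test for two proportions of the kind reported above, using statsmodels; the posting counts and group sizes are hypothetical.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts of high-reflection postings and total postings per group.
high_reflection = [18, 9]
total_postings = [60, 58]

z, p = proportions_ztest(high_reflection, total_postings)
print(f"z = {z:.2f}, p = {p:.3f}")
```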
REPORTING PRACTICAL AND STATISTICAL SIGNIFICANCE

Reporting the probability of a statistically significant result from an experimental comparison is necessary but not sufficient for making sound
decisions about causal relationships. In addition to the probability of significance, "best practice" suggests that the effect sizes for statistical comparisons are also reported (Wilkinson, 1999). Effect sizes (ES) are essential pieces of information for readers of research because they offer a metric for evaluating the practical "significance" or importance of research outcomes (Kirk, 1994). Vacha-Haase and Thompson (2004) describe an effect size as a statistic that quantifies the degree to which sample results deviate from the expectations specified in the null hypothesis. Effect size values can be used to evaluate the "practical significance" of the research results (Kirk, 1994). In fact, many journals will not publish manuscripts when results reporting statistical significance are not accompanied by effect sizes (Vacha-Haase & Thompson, 2004). Effect size reporting is important because a small probability of committing a Type I error (or making the decision that a result is significant) may be an artifact of an overly large sample size rather than a meaningful difference or relationship. When conducting quantitative studies, researchers should attempt to use adequate sample sizes to reduce bias in research outcomes caused by sampling error. However, very large sample sizes reduce sampling error substantially, sometimes causing very small differences or relationships to become significant. Effect size statistics, which are not dependent on sample size, are important for helping consumers of research studies interpret the practical importance of significant results reported in the literature. Effect size metrics are available for almost all statistical results reported (see Cohen, 1988). In addition to reporting effect sizes, some researchers have suggested that confidence intervals for statistical results be reported (Thompson, 2007). However, the calculation of confidence intervals can be somewhat complex when compared to the effect size measures discussed in the next section.
EFFECT SIZE MEASURES FOR EXPERIMENTAL COMPARISONS

Cohen's d is a common effect size measure for results of parametric t-tests, while eta squared (η²) should be reported for ANOVA designs. Cohen's d is calculated simply by forming a ratio of the difference between group means divided by the pooled or average standard deviation for both groups:

d = (X̄₁ − X̄₂) / SD_pooled

Eta squared (η²), on the other hand, is the variability due to group differences divided by the total model variability:

η² = SS_group / SS_total

The effect size d communicates the magnitude of group differences in terms of standard deviation units. When more than two groups are compared, the effect size measure eta squared is used to describe the amount of variation in a dependent variable explained by group differences. Most statistical packages report eta squared (η²) or a similar statistic for ANOVA designs, whereas Cohen's d requires computation by hand. Many statistical software packages, such as SPSS, do not offer effect size results for nonparametric statistics. However, the z statistic that is commonly provided for the Mann-Whitney U, Wilcoxon Signed-Ranks, and Kruskal-Wallis tests can easily be converted to r for reporting the practical significance of these nonparametric tests (Morgan, Leech, Gloeckner & Barrett, 2007). As is the case with eta squared and Cohen's d, r can be viewed as an effect size statistic that describes the influence of the independent variable on a dependent variable.
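The two formulas above can be computed directly from raw scores; the following sketch does so by hand in Python with hypothetical ratings for two groups.

```python
import statistics

# Hypothetical ratings for two comparison groups.
group1 = [3.8, 4.1, 3.6, 4.3, 3.9, 4.0]
group2 = [3.1, 3.4, 2.9, 3.6, 3.2, 3.0]

def cohens_d(a, b):
    """Mean difference divided by the pooled standard deviation."""
    n1, n2 = len(a), len(b)
    s1, s2 = statistics.stdev(a), statistics.stdev(b)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

def eta_squared(*groups):
    """Between-group sum of squares divided by the total sum of squares."""
    scores = [x for g in groups for x in g]
    grand_mean = statistics.mean(scores)
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    ss_total = sum((x - grand_mean) ** 2 for x in scores)
    return ss_between / ss_total

print(f"d = {cohens_d(group1, group2):.2f}")
print(f"eta squared = {eta_squared(group1, group2):.2f}")
```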
Table 2. Effect size interpretation (Cohen, 1988)

Statistical Analysis               Effect Size     Small      Medium     Large
Parametric t-tests                 d               ≤ .20      .21-.79    ≥ .80
ANOVA                              η²              ≤ .01      .02-.06    ≥ .14
Nonparametric t-tests              r               ≤ .10      .11-.30    ≥ .50
The binomial test                  h               ≤ .49      .50-.79    ≥ .80
Chi Square Goodness of Fit test    V (df = 1)      ≤ .30      .31-.50    ≥ .51
                                   V (df = 2)      ≤ .21      .20-.35    ≥ .36
                                   V (df = 3)      ≤ .17      .18-.29    ≥ .30

Table Note: df = degrees of freedom
In this case, the larger the r, the more important the practical significance of the result:

r = z / √N
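A one-line conversion in Python, borrowing the z value from the hypothetical binomial example earlier and assuming a total N for illustration.

```python
import math

# z value from the hypothetical binomial example above; N is an assumed total.
z, n_total = 2.44, 60
r = z / math.sqrt(n_total)
print(f"r = {r:.2f}")
```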
The effect size for the binomial test used to test the significance of differences for proportions is calculated from the simple difference between proportions and is represented by the symbol h (see Cohen, 1988). However, the calculation is not straightforward because constant differences do not occur across the scale of proportions (Rossi, 1985). Rossi (1985) provides a convenient table for transforming proportions to equal units and calculating h by simply subtracting their transformed values. Finally, Cramer's V is the effect size statistic reported for the Chi Square Goodness of Fit test that is used to analyze differences in proportions for more than two groups. Fortunately, most statistical software packages report Cramer's V, eliminating the need for any hand calculations. Table 2 summarizes the common effect size measures that are used for reporting the magnitude of practical significance for a result. Consistent reporting of effect sizes in the literature also offers important information that other researchers can use when planning similar research that is powerful enough for detecting treatment effects that have been identified as
important (Kieffer, Reese, & Thompson, 2001). In addition, when effect sizes are reported, meta-analyses of research can be conducted to determine an overall treatment effect for similar studies using similar outcome measures. This type of larger comparison provides researchers as well as consumers of research with insights regarding the consistency or generalizability of intervention effectiveness across various settings.
AN EXPERIMENTAL RESEARCH EXAMPLE INVESTIGATING THE COMMUNITY OF INQUIRY MODEL

Research related to computer mediated conferencing suggests that instructor and student interactions are important elements for enhancing student satisfaction with online coursework (e.g., Lee & Rha, 2009; Swan & Shih, 2005). The Community of Inquiry (CoI) model proposed by Garrison, Anderson, and Archer (2000) (see Figure 1) suggests that the quality of teacher and student interactions are necessary elements for establishing online communities of critical inquiry that are characterized by deep and reflective discourse. Studies investigating the use of the CoI model as a framework for guiding the development of online courses suggest that students enrolled in these courses perceive their
Figure 1. Community of inquiry model
educational experiences as worthwhile and valuable (Arbaugh, 2008). The elements that interact to form the CoI framework are social presence, teaching presence, and cognitive presence. Social presence is considered an online student's sense of being and belonging in a course (Picciano, 2003), while teaching presence refers to the "methods" that instructors use to create quality online instructional experiences that support and sustain reflective discourse. Cognitive presence is described by Garrison et al. as the extent to which learners are able to construct and confirm meaning through sustained reflection and discourse during online discussions. The quality of these elements and their interactions are hypothesized to positively influence students' satisfaction with their online educational experiences. I conducted experimental research with graduate students enrolled in an introductory online statistics course to investigate the effects of social presence and teaching presence on cognitive presence. My study, "The Influence of
Social Presence and Teaching Presence on the Quality of Online Critical Inquiry” is used as an example to illustrate how experimental research can be used to investigate the effects of instructional factors on student perceptions of their online learning environment.
Research Questions Posed

1. Will online learning communities where social presence is supported produce a significantly greater percent of discussion postings classified at the resolution level (most complex level of reflection) of cognitive presence as compared to communities of online learners where social presence is not supported?
2. Will online learning communities where both social presence and teaching presence are supported produce a significantly greater percent of discussion postings classified at the resolution level of cognitive presence as
compared to communities of online learners where only social presence is supported?
3. Will online learning communities where both social presence and teaching presence are supported produce a significantly greater percent of discussion postings classified at the resolution level of cognitive presence as compared to communities of online learners where social presence and teaching presence are not supported?

The majority of internal validity issues are related to the design of the experiment, research procedures and data collection methods. The following section provides an example of how to describe the experimental design, procedures and data collection methods while at the same time detailing how threats to internal validity are controlled.
METHODS

Design

The design used for this study was a true experimental design in which 33 students enrolled in an online version of a graduate-level, educational statistics course were randomly assigned to either a control, social presence, or social presence combined with teaching presence experimental discussion group condition. The independent variable for this study was the group assignment to either a control or CoI element discussion group, while the dependent variable was the number of reflective postings that each group posted. The complexity of reflective postings for each group was classified according to Garrison, Anderson, and Archer's (2001) framework for operationalizing the cognitive presence construct. They describe cognitive presence as the extent to which learners are able to construct and confirm meaning through sustained reflection and discourse. The
four phases of cognitive presence, ranging from the least to the most complex, as defined by Garrison et al. (2000) are: (1) a "Triggering" event promotes initial discourse related to the issue or dilemma under consideration; (2) the "Exploration" phase is characterized by discussions based on learners' experiences and includes brainstorming, questioning, and exchanging of information; (3) more complex interactions occur when discussion postings reach the "Integration" phase, where meaning is constructed from ideas generated in the exploratory stage and learners identify a range of possible solutions and evaluate their suitability; and (4) the most complex phase, "Resolution," is characterized by discourse that focuses on consensus building. The validity of solutions identified during the "Integration" phase is often tested by discussing applications of problem resolutions to learners' real-life experiences.
Participants

The participants for this study were graduate students (n = 33) enrolled in a graduate-level, educational statistics course at a mid-sized university in the western United States. An introductory educational statistics course is a required course for all students seeking a master's degree in programs offered through the College of Education, Health and Human Development. The program majors of students enrolled in the course included: educational leadership, curriculum and instruction, adult and higher education, counseling, and family consumer science. Twenty-five females and 8 males were enrolled in the course. The thirty-three students taking this course were randomly assigned to one of three experimental discussion groups: social presence only (n = 10), social presence combined with teaching presence (n = 12), and a control group (n = 11) where no types of online interactions were intentionally supported. Each experimental group was assigned to collaboratively solve and discuss the same problem-
based learning task designed to teach the concept of statistical significance. Group equivalence was established by comparing mean GRE scores and mean scores students earned on a quiz assessing their knowledge and understanding of concepts related to statistical significance. One Way ANOVAs found no significant differences across treatment groups on average quantitative GRE scores (F(2, 32) = 2.00, p = .16) or average verbal GRE scores (F(2, 32) = 1.51, p = .238). Students were also assessed with an instructor-created, 14-item quiz to determine their basic understanding of the concept of statistical significance. Results from a One Way Analysis of Variance (ANOVA) also found that the experimental groups did not differ significantly on average quiz scores, F(2, 32) = 1.41, p = .260.
Social Presence Condition

Students assigned to the social presence group were required to participate in two weeks of collaborative team-building activities facilitated by the instructor prior to the start of discussions related to a Problem Based Learning (PBL) task. During the second week of team-building, the social presence group students were required to engage in a discussion activity focused on strategizing how their group would collaborate to complete the problem-based learning task. Instructor interaction in this group was characterized by what Swan and Shih (2005) refer to as a "restrained" form of teaching presence, where the instructor only provided comments to learners about directions for completing the discussion assignment and other course functions. The instructor did not facilitate the PBL discussion discourse or supply direct instructional activities that would promote inquiry through reflective exchanges.
Combined Social Presence and Teaching Presence Condition

The social presence combined with teaching presence experimental group participated in the same collaborative team-building activities during the same two-week period as the social presence group. However, the instructor interacted with this group by engaging in direct instructional activities and actively facilitated the PBL discussion activity discourse. Once the PBL task was assigned, the social presence combined with teaching presence group was given two weeks to complete and submit their collaborative group response. This group was supplied with frequent and consistent instructor interactions that clarified misconceptions, posed reflective questions, promoted deeper understanding prompted by differing perspectives, and modeled responses representing complex cognitive processing. In addition, learners were provided with direct instruction through email when necessary to clarify concepts and their application.
Control Condition

Learners assigned to the control group condition were also given two weeks to complete the PBL task. However, control group students were required to complete the task individually. Once individual projects were completed, control group students submitted their individual responses and participated in the PBL task discussion activity. This group did not participate in any team-building activities prior to the PBL assignment, nor were they supplied with any instructor interactions in the form of reflective questions, model responses, or clarifications that would create quality teacher presence.
DATA COLLECTION AND ANALYSIS

Data analysis for this study was guided by recommendations suggested by Rourke and Anderson (2004) for conducting quantitative content analysis (QCA) of transcripts from computer mediated conferencing. Following procedures used by Henri (1992) and Garrison et al. (2001), message-level units of discussion, corresponding to what one participant posted into one discussion thread on one occasion, were coded. The four categories that were used to classify each message-level unit into one of four phases of cognitive presence were as follows: (1) Triggering Event; (2) Exploration; (3) Integration; and (4) Resolution.
CONTROLLING THREATS TO INTERNAL VALIDITY FOR THE COI STUDY

The major threats to internal validity addressed in this study included ensuring group equivalence, controlling treatment diffusion, and using instruments appropriately. Although students were randomly assigned to groups, reviewers for this
paper felt that participants with higher or lower math ability might still be unequally distributed among groups. Group equivalence on math ability was addressed by comparing the quantitative GRE scores across groups using a one-way ANOVA. Results from this comparison indicated that the group GRE quantitative means did not differ significantly. Another method that could have been used to control for preexisting group differences would have been to match students across groups by quantitative ability based on their GRE and quiz scores. Treatment diffusion is another legitimate threat to internal validity that could have compromised the outcome of this study. All of the students were enrolled in the same section of an introductory educational statistics course. However, care was taken to keep treatment groups isolated from one another by assigning them to separate courseroom discussions where they were unable to interact with students from other groups. Almost all of the students lived off campus and were not able to interact with one another outside of the courseroom. The instructor also did not work with students from more than one group at a time when providing direct instruction related to clarifying assignment activities and procedures. The validity of the data collection method was initially established by Garrison, Anderson, and Archer's (2001) research, which revised and piloted the cognitive presence coding scheme. In addition, they also reported procedures to establish the reliability of the coding procedures used to identify frequencies of cognitive presence. For this study, both the researcher and a graduate student practiced coding the discussion transcripts from other course assignments. Once the coding procedures were standardized, the discussion transcripts based on the problem-based learning activity were coded by both the researcher and the graduate student for each experimental group. Interrater reliability for the coding procedures was established by dividing the number of coding agreements by the
Table 3. Frequency and percent of postings by phase of cognitive presence for each experimental group

Experimental Group         n(a)   Triggering     Exploration    Integration    Resolution
                                  f      %       f      %       f      %       f      %
Control                    32     3      9.4     3      9.4     24     75      2      6.2
Social Presence            32     7      21.8    0      0       23     72      2      6.2
Social/Teacher Presence    49     3      6       5      10      28     57      13     27
Total                      113    13     11.5    8      7       75     66.5    17     15

Note. f is the frequency of messages and % is the percent of messages. (a) n = the number of messages posted by each group.
total number of agreements and disagreements. In addition, when disagreements occurred, both the researcher and the graduate student discussed the rationale for their coding until consensus was reached.
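This agreement index is simple percent agreement. A minimal sketch of the calculation is given below; the rater codes shown are invented for illustration and are not the study's data.

```python
def percent_agreement(codes_rater_a, codes_rater_b):
    """Interrater agreement: number of agreements / total jointly coded units."""
    if len(codes_rater_a) != len(codes_rater_b):
        raise ValueError("Both raters must code the same set of messages")
    agreements = sum(a == b for a, b in zip(codes_rater_a, codes_rater_b))
    return agreements / len(codes_rater_a)

# Invented practice codes for eight messages (not the study's transcripts).
rater_1 = ["Integration", "Exploration", "Integration", "Resolution",
           "Triggering Event", "Integration", "Integration", "Exploration"]
rater_2 = ["Integration", "Integration", "Integration", "Resolution",
           "Triggering Event", "Integration", "Exploration", "Exploration"]
print(round(percent_agreement(rater_1, rater_2), 2))  # 0.75
```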
REPORTING RESULTS FOR AN EXPERIMENTAL STUDY

Key components for reporting results of significance tests include the type of test used, the probability of making a Type I error, and the effect size. Additionally, it is important to specify the a priori alpha level that will be used to determine significance and a guide for interpreting the reported effect sizes. z tests for proportions were used to compare the percentage of postings at each phase of cognitive presence. The results of this study are used to illustrate how experimental comparisons for technology-supported interventions can be conducted and reported.
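The chapter does not show the computations themselves, so the sketch below uses the standard pooled two-proportion z statistic and Cohen's (1988) h, an arcsine-transformed difference between proportions. Because the study's z values were calculated by hand, this formula will not necessarily reproduce the reported values exactly; the counts plugged in come from Table 3, and everything else is illustrative rather than the authors' own code.

```python
from math import asin, sqrt
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """Pooled z test for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_tailed = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_two_tailed

def cohens_h(p1, p2):
    """Cohen's (1988) effect size for the difference between two proportions."""
    return 2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2))

# Resolution postings from Table 3: combined group 13/49 vs. control group 2/32.
z, p = two_proportion_z(13, 49, 2, 32)
h = cohens_h(13 / 49, 2 / 32)
print(f"z = {z:.2f}, p = {p:.3f}, h = {h:.2f}")
```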
Results Section for the COI Study

The frequencies and percentages of message-level units classified by phase of inquiry for each experimental group are presented in Table 3. z tests were used to determine if the percent of messages coded for each phase of cognitive presence differed significantly by experimental group (Triola, 2007). The alpha level for these
comparisons was set at .05. Effect sizes for the difference in proportions (h) were also reported for both significant and nonsignificant results (Cohen, 1988). Cohen (1988) suggests that the magnitude of h values be interpreted according to the following criteria: 0 to .49 (small), .50 to .79 (moderate), and .80 or greater (large). The greatest percent of postings across all three discussion groups was classified in the "Integration" (66.5%) phase of cognitive presence, while the lowest percent of messages for all three groups was coded in the "Exploration" (8%) phase. The social presence combined with teaching presence group was found to post a significantly greater percent of messages classified in the resolution category, the highest level of cognitive presence, than either the social presence group (z = 2.44, p < .05, h = .60) or the control group (z = 2.44, p < .05, h = .60). The social presence combined with teaching presence group also posted a significantly greater percentage of messages classified in the "Integration" category in comparison to the social presence group (z = 2.18, p < .05, h = .50) and the control group (z = 2.22, p < .05, h = .50). An example of how the data might be presented is shown in Table 3. The social presence group posted the largest percent of messages classified at the "Triggering Events" level of reflection (21.8%). Group comparisons found that the percent of social presence only group "triggering" messages was
significantly higher than the percent posted by the social presence combined with teaching presence group (z = -1.79, p < .05, h = .48). However, no significant difference was found between the social presence only group and the control group (z = -1.41, p > .05, h = .38). Comparisons between experimental groups for the percent of messages coded at the "Exploration" phase of cognitive presence were unremarkable. Significant differences in the percent of exploration messages were not found when comparing the social presence combined with teaching presence group to the control group (z = .15, p > .05, h = .04). Students assigned to the social presence only group did not post any messages coded at the exploration level of cognitive presence, preventing comparisons between this group and the other two experimental groups.
REPORTING RESULTS FROM THE COI STUDY TO INFORM CONSUMERS' DECISIONS

Statistical analyses should be reported in a manner such that consumers of the research who have at least a conceptual understanding of the research methods and statistical techniques used in studies can draw conclusions from experimental investigations (Bangert & Baumberger, 2005). Results for the z tests included the z statistic, the probability of significance (Type I error rate), and the effect size h. The introduction to the results section reported the alpha level that was set to make the decision about the significance of results. For this experiment, the alpha level was set at .05, and the introduction to the results section provided the values for interpreting the magnitude of the effect sizes (h) reported for each comparison. The probability of significance for each comparison was reported as either p > .05 or p < .05. The reason for using this convention rather than reporting the actual probability is that the z statistics were calculated by hand and the z table was used to estimate whether the z value was
greater or less than the .05 alpha level specified for each comparison. For most other statistical comparisons, statistical computer programs such as SPSS will provide in their output the actual significance levels, or probabilities of committing a Type I error, eliminating the need to use the "greater than" or "less than" symbols. However, SPSS and other programs will sometimes display "p = .000" for highly significant results. When this is the case, the researcher should report the result as "p < .001" because it is theoretically impossible to obtain a probability of a Type I error equal to 0.
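A small helper makes this reporting convention mechanical. The function below is our own illustration of the rule, not part of SPSS or of the chapter.

```python
def report_p(p):
    """Format a p value for reporting; never print 'p = .000'."""
    if p < 0.001:
        return "p < .001"
    return "p = " + f"{p:.3f}".lstrip("0")

print(report_p(0.00004))  # p < .001
print(report_p(0.0312))   # p = .031
```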
FUTURE RESEARCH DIRECTIONS

Results from empirical studies suggest that using the Community of Inquiry framework to guide the development of online courses and programs has a positive influence on perceptions of student satisfaction with Web-based learning experiences. Although quantitative studies indicate a positive relationship between the constructs that define the CoI model and student satisfaction, few if any studies have used experimental methods for this purpose. Past research has focused more on verifying measurement constructs to assess the individual elements of the CoI model. The majority of these empirical studies have reported relationships between the CoI elements and student satisfaction rather than attempting to verify experimentally the effects of the CoI elements, individually and collectively, on critical inquiry and student satisfaction. To build more thoroughly on the CoI research base, more experimental studies should be completed to generalize the findings from current research and to verify newly proposed strategies for supporting and sustaining the quality social discourse necessary for reflective and meaningful learner interactions. In addition, more experimental studies should be undertaken to identify the factors that are most influential in promoting optimal interactions among the
social, teaching, and reflective discourse proposed by the CoI framework, which Garrison et al. suggest promotes deep, durable learning and quality online learning experiences.
CONCLUSION

Experimental methods provide a rich opportunity for researchers to identify and verify important factors that have been hypothesized to contribute to perceptions of student satisfaction with online learning. However, results obtained from this type of research are only as good as the controls that researchers undertake to thwart threats to internal validity. Accurate descriptions of the experimental procedures and reporting of statistical results will allow consumers of the literature to make sound decisions about the generalizability of experimental research outcomes to their own unique contexts.
REFERENCES

Arbaugh, J. B. (2008). Does the community of inquiry framework predict outcomes in online MBA courses?

Bangert, A. W. (2008). The influence of teaching presence and social presence on the quality of online critical inquiry. Journal of Computing in Higher Education, 20(1), 34–61.

Bangert, A. W., & Baumberger. (2005). Research designs and statistical techniques used in the Journal of Counseling & Development, 1990-2001. Journal of Counseling and Development, 83, 480–487.

Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Chicago, IL: Rand McNally.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum. Cook, T. D., & Campbell, D. T. (1979). Quasiexperimentation: Design and analysis issues for field settings. Boston, MA: Houghton-Mifflin. Dobbs, R. R., Waid, C. A., & del Carmen, A. (2009). Students’ perceptions of online courses: The effect of online course experience. The Quarterly Review of Distance Education, 10(1), 9–26. Dziuban, C., Moskal, P., Brophy, J., & Shea, P. (2007). Student satisfaction with asynchronous learning. Journal of Asynchronous Learning Networks, 11(1), 87–95. Feuer, M. J., Towne, L., & Shavelson, R. J. (2002). Scientific culture and educational research. Educational Researcher, 31(8), 4–14. doi:10.3102/0013189X031008004 Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7–23. doi:10.1080/08923640109527071 Garrison, D. R., & Arbaugh, J. B. (2007). Researching the community of inquiry framework: Review, issues, and future directions. The Internet and Higher Education, 10, 157–172. doi:10.1016/j.iheduc.2007.04.001 Gliner, J. A., & Morgan, G. A. (2000). Research methods in applied settings: An integrated approach to design and analysis. Mahwah, NJ: Erlbaum. Gunawardena, C. N., Lowe, C. A., & Anderson, T. (1997). Analysis of a global online debate and the development of an interaction model for examining the social construction of knowledge in computer conferencing. Journal of Educational Computing Research, 17(4), 397–431. doi:10.2190/7MQV-X9UJ-C7Q3-NRAG
Henri, F. (1992). Computer conferencing and content analysis. In A. Kaye (Ed.), Collaborative learning through computer conferencing: The Najaden papers (pp. 117-136). Berlin, Germany: Springer-Verlag.
Rossi, J. S. (1985). Tables of effect size for z score tests of differences between proportions and correlation coefficients. Educational and Psychological Measurement, 45, 737–743. doi:10.1177/0013164485454004
Institute for Education Sciences. (2008). What works clearinghouse: Procedures and standards handbook (version 2.0). Retrieved from http://ies.ed.gov/ncee/wwc/references/idocviewer/Doc.aspx?docId=19&tocId=11
Rourke, L., & Anderson, T. (2004). Validity in quantitative content analysis. Educational Technology Research and Development, 52(1), 5–18. doi:10.1007/BF02504769
Kieffer, K. M., Reese, R. J., & Thompson, B. (2001). Statistical techniques employed in AERJ and JCP articles from 1988 to 1997: A methodological review. Journal of Experimental Education, 69, 280–309. doi:10.1080/00220970109599489 Kirk, R. E. (1994). Experimental design: Procedures for behavioral sciences (3rd ed.). Belmont, CA: Wadsworth. Lee, H.-J., & Rha, I. (2009). Influence of structure and interaction on student achievement and satisfaction in Web-based distance learning. Journal of Educational Technology & Society, 12(4), 372–382. National Research Council. (2002). Scientific research in education. In Shavelson, R. J., & Towne, L. (Eds.), Committee on scientific principles for educational research. Washington, DC: National Academy Press. Picciano, A. G. (2003). Beyond student perceptions: Issues of interaction, presence and performance in an online course. Journal of Asynchronous Learning Networks, 6(1), 21–40. Ross, S. M., Morrison, G. R., & Lowther, D. L. (2005). Using experimental methods in higher education research. Journal of Computing in Higher Education, 16(4), 39–64. doi:10.1007/ BF02961474
Swan, K., & Shih, L. (2005). On the nature and development of social presence in online course discussions. Journal of Asynchronous Learning Networks, 9(3), 115–136.

Thompson, B. (2007). Effect sizes, confidence intervals and confidence intervals for effect sizes. Psychology in the Schools, 44(5), 423–432. doi:10.1002/pits.20234

U.S. Congress. (2001). No Child Left Behind Act of 2001. (Pub. L. No. 107-110, 115 Stat. 1425).

Vacha-Haase, T., & Thompson, B. (2004). How to estimate and interpret various effect sizes. Journal of Counseling Psychology, 51, 473–481. doi:10.1037/0022-0167.51.4.473

Wilkinson, L. (1999). Statistical methods in psychology journals: Guidelines and explanations. The American Psychologist, 54(8), 594–604. doi:10.1037/0003-066X.54.8.594
KEY TERMS AND DEFINITIONS

Effect Size: A statistic that quantifies the degree to which sample results deviate from the expectations specified in the null hypothesis (Vacha-Haase & Thompson, 2004).

Internal Validity: "The approximate validity with which we can infer that a relationship is causal" (Cook & Campbell, 1979, p. 37).
Chapter 8
Student Performance in E-Learning Environments: An Empirical Analysis Through Data Mining
Constanta-Nicoleta Bodea, Academy of Economic Studies, Romania
Vasile Bodea, Academy of Economic Studies, Romania
Ion Gh. Roşca, Academy of Economic Studies, Romania
Radu Mogos, Academy of Economic Studies, Romania
Maria-Iuliana Dascalu, Academy of Economic Studies, Romania
ABSTRACT

The aim of this chapter is to explore the application of data mining for analyzing the performance and satisfaction of students enrolled in an online two-year master degree programme in project management. This programme is delivered by the Academy of Economic Studies, the biggest Romanian university in economics and business administration, in parallel as an online programme and as a traditional one. The main data sources for the mining process are the survey conducted to gather students' opinions, the operational database with the students' records, and the data regarding student activities recorded by the e-learning platform. More than 180 students responded, and more than 150 distinct characteristics/variables per student were identified. Because of the large number of variables, data mining is a recommended approach for analyzing this data. Clustering, classification, and association rules were employed in order to identify the factors explaining students' performance and satisfaction, and the relationship between them. The results are very encouraging and suggest several future developments.

DOI: 10.4018/978-1-60960-615-2.ch008
INTRODUCTION

The need for the continuing adaptation of the workforce and the demand for more flexible ways of acquiring competencies are the factors that motivate the use of e-learning. E-learning is recognized as a fundamental tool for a lifelong learning society. Therefore, e-learning has become a strategic vector in the development of the knowledge-based economy (Charpentier, Lafrance & Paquette, 2006). At the same time, student interest in online courses has increased over time. As a result, more and more courses, and even entire programs, are now delivered by means of information technology or offered online. This change in the education and training industry has intensified research on e-learning. Researchers want to discover students' preferences for various tools and e-learning platforms, the relationships between online learning and different learning styles, and the factors affecting student performance and satisfaction in online environments (McFarland & Hamilton, 2006). Research in the e-learning domain is facilitated by the extensive amount of data stored by e-learning systems. Most of these systems are able to collect data about student activities, tracking navigational pathways through educational resources, time spent on various topics, or number of visits. The e-learning systems also capture data about the amount and type of resource usage. This data is often the basis of the research. One way to better understand the data is data mining. By data mining, it is possible to discover patterns to be used in predicting student behavior and in the efficient allocation of resources. Romero and Ventura (2007) present the main findings of an educational data mining survey covering the period 1995-2005. Baker and Yacef (2009) made another survey covering the latest data mining approaches in the education domain. Both surveys show that the number of data mining
applications in education is constantly increasing, and that they cover many educational processes, such as enrollment management, academic performance, web-based education, and retention. Many case studies on data mining techniques in education are cited in the literature (Luan, 2002), (Ma et al., 2000), (Barros & Verdejo, 2000), (Ranjan & Malik, 2007). These case studies aim at predicting student performance, mainly through cluster analysis to identify relevant types of students. Delavari et al. (2005) proposed a model for the application of data mining in higher education. Shyamala and Rajagopalan (2006) developed a model to find similar patterns in the gathered data and to make predictions about students' performance. Luan, Zhai, Chen, Chow, Chang and Zhao (2004) presented different case studies on educational data mining. One of these studies intended to highlight the factors that determine the academic success of first-year students. The methods used were classification and regression trees and neural networks; decision trees and association rules were generated, and a sensitivity analysis was performed to analyze the factors. The variables considered were demographic variables and performance indicators prior to college, and the analysis was used to predict the overall grade average in the first year. The analysis carried out with the two classes of methods showed that the most important factors for academic success in the first year of college are SAT scores (roughly an equivalent of the high school average) and the position in the rankings achieved, on average, in high school. Another case study was developed at Cleveland State University. The purpose of this study was to explore the effects that granting financial aid to students and changing the admission rules might have, and to promote data mining techniques for building models to improve student services. The research objective was to identify the key indicators that best predict whether students stay in college and ultimately graduate. The data used were those available at registration and cover the
period 1998-2003. Data quality was not very high because of missing values and anomalies (graduates without a declared specialization, freshmen with an excessive number of credit points, changes of major, and a large number of undecided students without a specialization). Only first-year students (7,209 students) and students transferring from other four-year schools (9,544) were analyzed. The variables used were: high school grade average, rank in the high school attended, age at entry into college, ACT score (similar to the SAT), race, gender, first-semester average, number of declared majors, changes of specialization, college graduation, and the distance between the university and home. The analysis revealed that the most important factors in explaining performance are the high school average, age, ACT score, and the number of declared majors. Due to limited access to data, the analysis could not include financial aspects (financial aid for students, student accounts) or information about students' behavioral patterns. Haddawy and Hien (2006) present a decision support system for improving admissions at the Asian Institute of Technology. The Institute has a large number of international students (over 2,000), especially at the master and doctoral levels, coming from universities about which the institution has insufficient information to classify them and thus properly interpret the students' previous preparation. Therefore, the decision was made to develop a data mining analysis for predicting student success in relation to the school or college of graduation and other information from the registration file. Ten variables were used, namely: age, sex, marital status, nationality, English language test result, the institute from which the previous diploma was obtained, the previous specialization, the grade average of the previous diploma, the previous field of study, and the type of previous degree (MA, PhD). Because specialization and field of study had a significant number of distinct values, these values were grouped before working with the class label. The values of the nationality attribute and of the institute of graduation were replaced
with meaningful values. Thus, for countries, the rankings used in the World Bank report on gross domestic product (LIC, LMC, NOC, CAB, and UMC) were applied. Instead of the name of the institute of graduation, its position (from 0 to 10) was used in a ranking of institutions constructed from the correlation between the results of previously admitted students at their institute of origin and their results at the Asian Institute of Technology. The analysis used a Bayesian network. The data cover the period 2003-2006 and include 1,386 students in master programs and 302 doctoral students. Work is currently underway to improve the analysis by identifying similar cases (previously enrolled students with a profile similar to the candidate's) in order to predict performance through case-based reasoning. Waiyamai (2004) presents a case study in which data mining techniques are applied to assist students in choosing their specialization. The choice of specialization is considered an important factor in ensuring academic performance; although it is an important decision, students are not sufficiently supported in making it. By analyzing data about students (including data on the courses they have chosen), the authors try to determine the best specialization for each student. Initially, a classification tree was generated to indicate the most appropriate specialization for a given student profile (its associated features). The generated classification tree had low precision, however, the main cause being the large number of specializations from which a student may choose. The method was also not considered suitable because it generates a single solution (a recommended specialization), whereas what is often sought is a list of recommended majors, possibly ordered by a compatibility factor. For this reason, another model was generated: a decision tree that estimates the compatibility between a specialization and a student. With this tree it can be predicted whether a student is or is not suited to a particular specialization, so the classification is done into two classes: suited / not suited. Success in a particular specialty
is considered to mean completing the programme within the normal duration (4 years) and ranking in the top 40% for a certain degree. This model provides a better prediction (the percentage of correct predictions is 80%) and provides, for the same student, a set of recommended majors. Regarding student performance in online courses, the majority of studies show that performance is the same for a course taken traditionally and one taken online. Even when students enrolled in a traditional class felt their course was more effective in developing knowledge and skills, no difference between the groups was found on the comprehensive final examination (Priluck, 2004). Improvement of student achievement has always been one of the main goals of education. Regarding student satisfaction in online courses, despite online learning's multiple advantages, such as being learner centered, offering location flexibility, and providing archival capability for knowledge reuse and sharing, all these advantages seem to be insufficient to satisfy most students. Many studies reveal that students are more satisfied with a traditional class experience than with an online class. One reason for student dissatisfaction could be the perception that they have to work harder online and that the professor isn't fulfilling his or her responsibility (Piccoli, Ahmad & Ives, 2001). It must be noted that student satisfaction is derived from much more than the course delivery mechanism. Zapalska, Shao and Shao (2003) found that students are generally satisfied with the discussion board feature yet dissatisfied with the chat room feature. Therefore, a simple reliance on one or the other of these features in an online class would likely affect student satisfaction. Further, as noted by McDonald, Dorn, and McDonald (2004), online students must be proficient readers in order to be successful.

THE RESEARCH CONTEXT

The Academy of Economic Studies (AES) is a national university. Its education and training programs are delivered based on a public budget coming from the Education and Research Ministry and also on its own resources. It has freedom and autonomy according to the law. AES is considered a remarkable representative of higher economic education in Romania. The university has 10 faculties, over 49,000 students and course attendants (35,500 in the undergraduate cycle, 9,400 in master programs, 2,500 enrolled in PhD programs, and over 1,600 in academic and post-graduate courses), and 2,000 teaching, technical, and administrative staff. In 2009-2010, AES delivered more than 192 education and training programs, 26 of which were delivered as online programs (see Table 1).

Table 1. AES education & training portfolio for 2009-2010

AES Education & Training Programs    Total Number    Online Programs
Bachelor's degree in Economics       13              0
Continuing education (Trainings)     75              16
Scientific Master's degree           29              0
Professional Master's degree         56              10
International Master's degree        9               0
Doctor's degree                      10              0
Total                                192             26

AES promotes economic, administrative, and judicial values, together with the values of science and universal culture. Its commitment is to achieve excellence in economic education, and so to ensure that the next generation of economists and administrative specialists is fully prepared for success on the workforce market. AES delivers 26 online programmes, and more than 5,000 master students attend the online master programmes. The infrastructure for all these online programmes is shown in Figure 1. Several e-learning platforms are used, but Moodle is preferred by the majority of programme organizers.
Figure 1. Online infrastructure for AES educational programmes
Moodle (http://moodle.org/) is a Course Management System (CMS), also known as a Learning Management System (LMS) or a Virtual Learning Environment (VLE).
PURPOSE OF THE RESEARCH

The research objective is to identify the main factors affecting the students' performance and
satisfaction in e-learning environments. The main research questions are:
• Which are the most important factors affecting student performance in e-learning environments?
• How does overall student satisfaction influence students' e-learning performance, and in which specific situations?
• How do knowledge background, the faculty from which students graduated, and student activity on the platform affect students' performance, and in which cases?
• What is the relation between evaluation relevancy and performance in e-learning (a relation based on communication involvement, communication efficiency, involvement in online activities, and the teacher's impact), and how can this relation be described?
• How does the time spent in front of the computer influence the activity on the platform and the performance in e-learning environments (for the second year)?
• Which are the most important factors affecting students' performance with regard to platform satisfaction?
• Which are the association rules between the attributes that describe the way in which the students' initial requirements were met?
• What is the difference between the situation in which general association rules are generated and the one in which class association rules are mined?
THE RESEARCH METHODOLOGY

The research is done using data gathered from students enrolled in an online master degree programme in project management. This programme is delivered by the Academy of Economic Studies, the biggest Romanian university in economics and business administration, in parallel as an online programme and as a traditional one. A survey was made to collect students' opinions about the online programme in general and, specifically, regarding the e-learning platform, the educational resources available online, the communication with trainers, the assessment, and the practical approach of the different disciplines. The questionnaire was developed in order to collect a large amount of relevant information and is included in the appendix of the chapter. The questionnaire is structured into five main parts:
• questions regarding organization aspects and the technical platform;
• trainee's needs (motivation to participate in an online education programme);
• trainee's commitment towards the project management educational programme;
• syllabus and expectations from training providers;
• trainers' involvement.
Both open and multiple-choice questions were addressed. The questionnaire was given to 400 students enrolled in the two-year master programme in project management; 52 students enrolled in their 1st year of study and 129 students in their last year responded. The data included in the filled-in questionnaires was processed and recorded in an Excel database for further analysis. The performance measures are defined based on the following elements:
• the grades at all 14 disciplines scheduled in the first academic year and the 6 disciplines included in the curricula for the second year;
• the practical scores at all 14 disciplines scheduled in the first academic year and the 6 disciplines included in the curricula for the second year;
• the number of failures at first academic year exams;
• the number of failures at second year exams.
This data, except the project scores, was taken from the operational database administered at university level. The practical scores, as part of the grades, were available on the e-learning platform, the professors reporting on the platform all assessment components (tests, project, and final exam) at all disciplines. The performance measures are:
• Grade Point Average in the first academic year (GPA_I)
• Grade Point Average in the second academic year (GPA_II)
• Practical Score Average in the first academic year (PSA_I)
• Practical Score Average in the second academic year (PSA_II)
• Aggregated Performance in the first academic year (EVALUATION_PERFORMANCE_CLASS_I)
• Aggregated Performance in the second academic year (EVALUATION_PERFORMANCE_CLASS_II)
EVALUATION_PERFORMANCE_CLASS_I is defined as follows:
•
•
E VA L U AT I O N _ P E R F O R M A N C E _ CLASS_I = 0, if GPA_I is between 6 and 7 E VA L U AT I O N _ P E R F O R M A N C E _ CLASS_I = 1, if GPA_I is between 7.01 and 8 and over 3 failed exams E VA L U AT I O N _ P E R F O R M A N C E _ CLASS_I = 2, if GPA_I is between 7.01 and 8 and less than 2 failed exams E VA L U AT I O N _ P E R F O R M A N C E _ CLASS_I = 3, if GPA_I is over 8.01 and less than 2 failed exams E VA L U AT I O N _ P E R F O R M A N C E _ CLASS_II is similarly defined, using GPA
and number of failures from the second academic year. Some additional data regarding student activities on the virtual space was gathering based on the statistics provided by the e-learning platform. The Moodle platform offers analytical and graphical statistics regarding the student activity, such as: the number of platform access, per day, month, semester, year, entire programme, the number of new subject initiated by a student, the time spent on the virtual space, the area of virtual space visited by the student. Figure 2 presents some of the facilities offered by Moodle. As we can see, some students have a uniform platform accessing pattern, others access platform only during the exam period, and others did not use the platform, using additional communication solutions (e-mail groups) in order to be informed. Based on the information provided by the elearning platform, the following characteristics were defined for each student: • • • • • •
Maximum_access_number_Class, Platform_access_total_number_Class Uniformity_Class Maximum_access _number per semester Number of subject _initiatives Platform_access_total _number,
In order to identify the main factors affecting the student’s performance and satisfaction in elearning environment, the data mining techniques were chosen. The decision was made based on the characteristics of these techniques, such us: understandable, easily computable, visual, interactive, working directly on database environments. There are many accessible data mining techniques which could be used by people having not a specific training in data analysis for immediate and lasting benefit. Many researches underline these advantages of the data mining techniques over the statistical ones: “It was reported that models constructed
155
Student Performance in E-Learning Environments
Figure 2 Statistics regarding the student activities provided by the e-learning platform
through data mining of inductive exercise were better in terms of prediction accuracy than those constructed through statistical measures with hypothetic-deductive approach. As data mining models have relatively higher degree of accuracy, we use such tools to develop predictive data mining model for students’ performance in Indian educational system” (Ramaswami &. Bhaskaran, 2010). We don’t neglect the efficiency of statistics in determining correlation between factors which influence students’ performance, that’s why we use it as a starting point for more advanced data min-
156
ing analysis. The blending between data mining and statistics is strongly suggested in the literature (Friedman, 1997). For obtaining a successful data mining processing, a thorough understanding of data through statistics is also a must. Other necessary ingredients are: a big enough amount of data and an initial data modeling. Figure 3 presents this three main data sources used in the data mining process. During the Data preparation and Data modeling phases, only data related to master students in second year were considered. The reason is there is not so many
Student Performance in E-Learning Environments
Figure 3. Data flow in data mining process
data on performance for freshmen. Or data mining is focused on performance analysis, so there is not useful to include all students in data mining analysis. The collected data will be used on a further longitudinal analysis of students’ satisfaction, when it will be possible to analyze how perceived satisfaction is changing during the programme, when students pass into the second year. A preliminary data analysis was done in a traditional manner, as part of a larger and more detailed process of data exploratory analysis, organized as a data mining project. This kind of project is structured according the DM – CRISP methodology into the following six phases: a. Requirements understanding: this initial phase focuses on understanding the project objectives and requirements from a business perspective, then converting this knowledge into a data mining problem definition and a preliminary plan designed to achieve the objectives. b. Data understanding: the data understanding phase starts with an initial data collection and proceeds with activities in order to get familiar with the data, to identify data quality problems, to discover first insights into the data or to detect interesting subsets to form hypotheses for hidden information.
c. Data preparation: the data preparation phase covers all activities to construct the final dataset from the initial raw data. Data preparation tasks are likely to be performed multiple times and not in any prescribed order. Tasks include table, record and attribute selection as well as transformation and cleaning of data for modeling tools. d. Modeling: in this phase, various modeling techniques are selected and applied and their parameters are calibrated to optimal values. Typically, there are several techniques for the same data mining problem type. Some techniques have specific requirements on the form of data. Therefore, stepping back to the data preparation phase is often necessary. e. Evaluation: at this stage in the project you have built model/models that appear to have high quality from a data analysis perspective. A key objective is to determine if there is some important business issue that has not been sufficiently considered. At the end of this phase, a decision on the use of the data mining results should be reached. f. Deployment: creation of the model is generally not the end of the project. Even if the purpose of the model is to increase knowledge of the data, the knowledge gained will need to be organized and presented in
157
Student Performance in E-Learning Environments
a way that the customer can use it. It often involves applying “live” models within an organization’s decision making processes.
•
We will present in this chapter some of the main phases of data mining process.
DATA UNDERSTANDING A special section was designed to reveal information about students which might affect their performance in the online programme and also their preferences in the learning process. The respondents’ distribution was analyzed from the following point of views (Figure 4): •
•
•
•
158
practical experience in project management: 43% were juniors in project management activities (less than 3 years of experience), 21% had between 3 and 5 years of experience, just 2% of them were seniors (over 5 years of working in project management) and the rest of respondents didn’t specify their level of expertise; experience in project management educational programmes (whether they attended or not other project management courses): just 13% of them were engaged in previous forms of project management education (occasional workshops, trainings at work, shorter project management courses organized by well-known institutions), but 61% of them were already in the second year of project management master; age: most of our respondents were below 30 years old (50% of them were between 22-25, 31% were between 26-30, 13% were between 31-35 and just 6% were above 35 years old); monthly income: 93% of online students have a considerable enough monthly income;
•
daily activity in virtual environments: 23% spend between 5 and 8 hours in average per day and 32% spend over 8 hours per day in front of the computer; besides work, they use the virtual environments for documentation, communication, forums, e-mailing, personal or professional improving, entertainment; their field of activity: most of our respondents come from IT (62%), others come from telecommunications, banking, research, education, commercial, logistics, financial consulting, constructions; the most common jobs among our students are: software developer, IT analyst, business analyst, consultant, engineer, researcher, team leader, project manager;
Preliminary Data Analysis “Organization and Technical Platform” section; Trainees have a stronger preference for online education: 93% of the interrogated students answered that the online master programme in project management should continue in the following years, despite its drawbacks. They justified their option by listing some of the reasons for the emergence of e-learning: a way of overcome time constraints, a good surrogate of traditional learning, an efficient method of learning, flexibility, easy access to information, interactivity. Just 2% considered the online platform to be inaccessible, from the technical point of view. Moreover, 14% of them said that the platform is not appropriate, because it hasn’t all the technical facilities needed by the users. The high level of computer skills is proved also by the fact that most students don’t consider a technical training session necessary prior to using the platform. Still, they consider the face-to-face meetings organized occasionally to be useful enough. Other researches revealed that face-to-face approaches and online approaches have equal significance in educational process. The balance might tilt towards
Student Performance in E-Learning Environments
Figure 4. The respondents’ distribution
one approach depending on the subject to be taught (Kelly et al., 2007). Although most students have a passive attitude towards online discussions, they consider forums an important ingredient of a successful e-learning platform, because they can easily extract information from what others say. Most students think that flexibility in learning environments helps them to obtain better results (see Figure 5c) they like to choose their learning path or, at least, their deadlines for preparing the homework. This finding is also highlighted by
Monolescu & Schifter: they explain the request for flexibility and interaction by the need to feel included. (Monolescu & Schifter, 2000) When asked about what they liked in MIP organization and technical platform, we could identify 4 directions: •
communication facilities: forums, teacher also interact with students, existence of a community;
159
Figure 5. Preliminary analysis of the responses
Student Performance in E-Learning Environments
160
Student Performance in E-Learning Environments
•
•
•
information accessibility: you can find our very easily the news, there is a so-called announcement area in the platform and also a library area; the exams are marked in a public calendar; good structure: activities are divided in courses and seminars, although the courses are online, the exams are face-to-face and organized in week-ends (so, the traditional methods are very well blended into the modern organization), the technical platform is user oriented, flexibility: students can learn from everywhere, students establish their own time to consult the materials, which are available 24 hours;
When asked about what they dislike in MIP organization and technical platform, most students identified: • •
•
late or lack of feed-back from the instructors/ teachers; draw-back in the technical facilities: lack of privacy when uploading personal projects into the platform, too complicated design, lack of video presentations in the platform, lack of online exams; organization problems: exams are organized in a too short period, changes are made in a too short notice time, but final grades are announced too late;
Concluding, one can notice that some aspects depend on students’ preferences (the way in which exams are taken or the degree of interaction with the teachers), where others are much more objective. We notice a strong need of human interaction, although it’s an online programme: students want more explanations from their teachers, quick answers and an active community. In Figure 5d, one can see the rate of student-instructor communication efficiency, on students’ opinion.
“Trainee’s Needs” Section; The students of a project management educational programme proved to be strong willed: the majority of them enrolled in the programme with the precise scope of improving their knowledge of project management (see Figure 5e). The main reason for attending an online programme and not a classical one was, for 91% of them, the lack of time: they have a full-time job. Other reasons mentioned by students were: legal reasons (they are enrolled in other education programmes, which aren’t online), they need a diploma and they think that an online master is easier than a classical one, they aren’t from the city in which the master is organized or they receive recommendations to follow these project management courses. Although just 7% of the students attended previous online courses, it is interesting that all of them talked very highly about those courses, so the e-learning approaches are very well received. This observation is also strengthened by other researches (Tallent-Runnels et al., 2005), (Young & Norgard, 2006). “Trainee’s Commitment” Section; Students don’t consider initial selection of candidates to be important for the quality of education: 47% of the ones questioned said that anyone should be allowed in an online master programme of project management, 38% of them stated that an initial check through CV and document revision would be necessary and just 15% of the students admitted that an online test is the proper method for acceptance (see Figure 5f). Although they don’t have any demands on prerequisites for the courses, they seem to be concerned to improving their knowledge by reading more than it is required in “class”: when asked if they searched and used other resources than the one indicated by their instructors, 86 of them answered affirmatively, “often” and “very often”, 60 of them were neutral and 26 answered “sometimes” and “never”. Most of our respondents think that the knowledge learned at master can be applied at work (see Figure 5g). Unfortunately, 62% of them consider that they don’t have the support of
161
Student Performance in E-Learning Environments
their co-workers for improving their knowledge through a master programme. This can affect their learning performance. When asked to list the elements which affect their performance in class, most of them named the degree to which the programme fulfils their knowledge needs. Other elements were: quality of the education platform, the feed-back received from the instructors, the communication to their peers, their desire to learn and to improve professionally, their teachers’ behavior and attitude. The students consider that they achieve a better performance when working in teams and for the right reason: to develop their collaboration skills and for preparing for real-life projects (see Figure 5h). The majority of them consider that involvement in online activities will bring them a better grade: so, in their opinion, commitment is rewarded. “Syllabus and Expectations from Training Providers” Section; The students are very preoccupied about the accreditation issue: 165 of the subjects said that it was very important for them the fact that their master was a certified programme and 154 militated for the importance of master provider’s accreditation (see Figure 5a and b). Our survey proved that students considered word documents to be most helpful (81 of respondents). The next preferred formats for learning documents are slides and e-books. 22% of respondents don’t care about the course format: a significantly amount of this kind of individuals are in the 2nd year of master. According to other answers, we can’t say that 2nd year students are less interested in the programme than their younger colleagues, so a more plausible explanation can be the fact that 2nd year students are already used to all kind of electronic materials: the power of adaption to an electronic environment increases gradually (Arbaugh, 2004). The project seems to be the most suitable form of evaluation, in our respondents’ opinions: 73% of them consider the projects developed at various disciplines to be relevant for their training and
162
35% of them pointed out the projects to be the best form of evaluation. Other types of evaluation which seemed relevant to the students were multiple-choice tests (33%) or peer reviews (27%). Although they are convince of the importance of projects and homework, most of them agreed that just ongoing evaluations aren’t enough: they prefer either final exams, or a mix form of evaluation (summative evaluation and ongoing tests). Most students consider that evaluations reflect their knowledge on a certain topic. Thematic content and requirements are very important for the students and they consider that the resources provided at the course (course support, project models/ case studies, templates) are sufficient enough for sparking their interests about the addressed issues. When asked about their favorite subject and reasons for their preference, we noticed the following trends: •
• • • •
personal inclination towards a particular subject or a particular type of teaching style; quality of courses material; knowledge discovery; high applicability of course; previous knowledge on that subject;
The main reasons for disliking a certain subject were connected to the low course presentation and content: too much information, too complicated formula, a low degree of coherence of the information presented, the course required more prerequisites then announced. The dissatisfaction registered at an online programme resembles to the ones reported at a traditional educational programme. “Trainers’ Involvement” Section; The instructor’s role is very important to the over-all quality of the education programme, according to 86% of the respondents. This finding is also supported by previous researches (Guardado & Shi, 2007). The main responsibility of the instructor is to offer support for learning activities (expla-
Student Performance in E-Learning Environments
nations, recommendations). Other responsibilities which were pointed out by the students are: promoting the collaborative learning, monitoring the students’ participation, facilitating/ moderating communication, providing well-structured materials, presenting real case studies, implementing strategies for adult education, being actively involved in discussions with the students. 70% of the students prefer communicating with their teacher on the online platform, 24% of them prefer e-mailing and just 6% like face-to-face discussions. As interactivity is important to 66% of the questioned students, we asked for the most valuable interaction techniques which can be used by a teacher. The following techniques and instruments were identified as being useful: feed-back, open and creative questions, team work, debate subjects, short activities that have a percent in the final grade, online presentations. It is good that students are willing to give feed-back to their teachers, as this “can provide a rich source of information to help the instructors evaluate specific elements of course design and structure, make revisions, and assess the effects of those changes” (Brew, 2008). Synthetic Analysis. Due to the fact that most students don’t have much experience in project management, as one can see from respondents’ distribution, their main concern about the online programme aims at improving the content of the courses. Most of them are already employed, so they are capable of saying if the courses have practical applicability or not, as some answers to the open questions also demonstrated us. From their suggestions and comments, we also identified some problems related to the online learning: • •
they request a reliable e-learning platform, with as few downtimes as possible; they ask for a higher reaction time from their instructors (some complained that the instructors don’t enter the platform after posting the courses); they even proposed a
•
• • • •
• •
•
maximum allowed interval for answering at posted questions; because of the lack of time (this is the main reason for enrolling to an online programme and not to a classical one), students want to do as many activities as possible online, including examination and paying their tuition fees; they ask for a standard format of the courses; they want to access easily all information, so good searching tools would be desirable; they want a quick update of announcements, maybe through group e-mails also; they feel the need of face-to-face meetings, in some key moments, such as the beginning of the semester; they want to be sure that the programme they are following is recognized; they want their diploma to be as valid and valuable as one obtained from a traditional programme; they want courses with a much more practical orientation, as they want to use the gained knowledge at work;
The fulfillment of the previous listed requests and the necessity of good courses content would make the students of an online educational programme to achieve a higher level of performance. In our respondents’ opinion, the fulfillment of trainee’s needs in an online education programme has the main impact on student’s performance (see Figure 5i). According to the R square value (0.86), the endogenous variables are explained in 86% proportion by the exogenous ones, so the students’ performance is positively influenced by satisfaction degree of organization and technical platform, satisfaction degree of syllabus, satisfaction degree of trainers’ involvement, trainee’s commitment and degree of fulfillment of trainee’s needs. The Wald statistic test came to strengthen the idea that all considered variables have an impact on quality. The statistics from Figure 5i
163
Student Performance in E-Learning Environments
was made with EViews tool and Ordinary Least Squares method was applied. The success might depend also on the number of hours spent daily in virtual environments or the field of activity in which they work. So, the regression model from Figure 5 might be improved by a deeper analysis.
DATA PREPARATION FOR DATA MINING

The data mining database contains 129 records and 151 attributes: 31 numeric attributes and 120 nominal attributes. The main data preparation activities were:
• •
•
•
•
164
It was checked the fill-out level of the main data mining table and the relevance of each attribute; Information about students performance their grades and failed exams was selected. Information was about student’s activities on the e-learning platform, number and type of interventions, classified according (Bulu & Yildirim, 2008) were defined, based on the reports provided by the elearning platform. Information about students background was collected from the Student Profile database The annual average grade has been calculated for each student, for each year, based on the information related to the examination results. The number of failed exams in combination with the average will provide us the performance level for every student. The Figure 3 shows the characteristics of each performance level. The development of the Data mining database, having the following tables: TDM1, TDM2, TDM3.
The algorithms used for data pre-processing were the following (a minimal code sketch follows this list):

• the AttributeSelection algorithm together with the InfoGainAttributeEval evaluator, for attribute filtering;
• the AttributeRanking evaluation method and the Ranker ordering method, for selecting attributes.
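A minimal sketch of this pre-processing step with the Weka Java API; the ARFF file name, the choice of class attribute and the number of retained attributes are assumptions:

import weka.attributeSelection.InfoGainAttributeEval;
import weka.attributeSelection.Ranker;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.supervised.attribute.AttributeSelection;

public class SelectAttributes {
    public static void main(String[] args) throws Exception {
        // Hypothetical export of the data mining table
        Instances data = DataSource.read("tdm_students.arff");
        data.setClassIndex(data.attribute("EVALUATION_Performance_Class_I").index());

        Ranker ranker = new Ranker();
        ranker.setNumToSelect(10); // keep the ten highest-ranked attributes (illustrative)

        AttributeSelection filter = new AttributeSelection();
        filter.setEvaluator(new InfoGainAttributeEval()); // information gain with respect to the class
        filter.setSearch(ranker);                          // ranking instead of subset search
        filter.setInputFormat(data);

        Instances reduced = Filter.useFilter(data, filter);
        System.out.println("Remaining attributes: " + reduced.numAttributes());
    }
}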
THE MODELING PHASE; THE MAIN FINDINGS

Three types of data mining approaches were adopted in this study. The first approach is clustering, combined with feature selection to determine the importance of the prediction variables. The second type of data mining approach uses different classification trees. We decided to use classification tree models because of some advantages they may have over traditional statistical models such as logistic regression and discriminant analysis: a classification tree can deal with a large number of predictor variables and, as a non-parametric model, it is able to capture nonlinear relationships and complex interactions between the predictors and the dependent variable. The third data mining approach is association: the task involves the discovery of associations between variables with a user-defined accuracy (confidence factor) and relative frequency (support factor).
CLUSTERING

Ten attributes out of 151 were used to identify the relationship between the students' overall satisfaction regarding the online master programme and their learning performance. The most important attributes for the class EVALUATION_Performance_Class_I are shown in Box 1.
Box 1. 0.1856 EVALUATION_Performance_Class_II 0.0846 2_8_SYLLABUS_WorkImpact 0.0799 3_5_PLATFORM_Flexibility 0.0645 Platform_access_total_number_Class 0.0534 2_6_SYLLABUS_RessourceSufficiency 0.042 1_5_NEEDS_InitialRequirements 0.0385 5_5_INSTRUCTORS_CommunicationEffeciency 0.0368 3_1_PLATFORM_Adequacy 0.0181 Maximum_access_number_Class
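All five clustering experiments below follow the same Weka workflow: a small subset of attributes is selected and Simple K-Means is run on it. A minimal sketch of that step is given here once rather than repeated for every experiment; the file name, the fixed seed and k = 2 are assumptions suggested by the two-cluster results reported:

import weka.clusterers.SimpleKMeans;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ClusterStudents {
    public static void main(String[] args) throws Exception {
        // Hypothetical ARFF containing only the attributes selected for one
        // experiment (e.g. the Box 3 list); no class attribute is set, since
        // k-means treats all attributes as ordinary inputs.
        Instances data = DataSource.read("experiment1_attributes.arff");

        SimpleKMeans kMeans = new SimpleKMeans();
        kMeans.setNumClusters(2); // the chapter reports two clusters per experiment
        kMeans.setSeed(10);       // fixed seed for reproducibility (illustrative)
        kMeans.buildClusterer(data);

        // The centroids correspond to the "student profiles" discussed in the
        // text, and the cluster sizes to the reported instance percentages.
        System.out.println(kMeans);
        for (int i = 0; i < data.numInstances(); i++) {
            int cluster = kMeans.clusterInstance(data.instance(i));
            System.out.println("instance " + i + " -> cluster " + cluster);
        }
    }
}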
The most important attributes for the class EVALUATION_Performance_Class_II are listed in Box 2. To develop the clustering model, the Simple K-Means algorithm is applied using the Weka platform (Bouckaert, Frank, Hall, Kirkby, Reutemann, Seewald & Scuse, 2010). The first cluster analysis experiment is related to the overall student satisfaction and the way in which this satisfaction influences the student's learning performance. The ten attributes listed in Box 3 were chosen. The cluster analysis results are presented in Figure 6.

Box 2. 0.1856 EVALUATION_Performance_Class_I 0.0762 1_5_NEEDS_InitialRequirements 0.0683 3_5_PLATFORM_Flexibility 0.0626 2_8_SYLLABUS_WorkImpact 0.0574 3_1_PLATFORM_Adequacy 0.0563 5_5_INSTRUCTORS_CommunicationEffeciency 0.0514 Platform_access_total_number_Class 0.0416 Maximum_access_number_Class 0.0327 2_6_SYLLABUS_RessourceSufficiency

Box 3. 1_5_NEEDS_InitialRequirements 2_6_SYLLABUS_RessourceSufficiency 2_8_SYLLABUS_WorkImpact 3_1_PLATFORM_Adequacy 3_5_PLATFORM_Flexibility 5_5_INSTRUCTORS_CommunicationEffeciency EVALUATION_Performance_Class_I EVALUATION_Performance_Class_II Maximum_access_number_Class Platform_access_total_number_Class

The student profile for Cluster 0, according to the cluster centroid, is:

• The student considers that the online programme meets the initial requirements he had when he enrolled in a satisfactory way (3);
• The student considers that the resources provided at the courses are sufficient (2) to acquire the knowledge he needs;
• The theme of the master projects corresponds to the work activities to some extent (3);
• The study platform used in online MIP is appropriate to some extent;
• The student wants to be involved in any decision regarding the learning activity;
• The communication between teachers and student is efficient enough;
• The performance class for the first and second year is 3, meaning the average exam grade is over 8 and the number of failed exams is less than 2;
• The maximum platform access number is between 26 and 40;
• The platform access total number is less than 600.
This profile denotes a student whose expectations were less satisfied at the end of the programme; at enrolment the student already had some knowledge in the project management domain, so he learned easily during the master programme. That is why the performance of the students from this cluster is high. The student profile for Cluster 1, according to the cluster centroid, is:

• The student considers that the online programme meets the requirements he had when he enrolled very well (1);
Figure 6. The first clustering analysis using Simple K-Means
• The student considers that the resources provided at the courses are sufficient (1) to acquire the knowledge he needs;
• The theme of the master projects corresponds to the work activities a lot;
• The study platform used in online MIP is appropriate, to a large extent, for the purpose it is used for and according to the students' requirements;
• The student considers that he must be able to choose his homework deadlines;
• Communication between students and teachers is good enough;
• The performance class for the first and second year is 2, meaning the average exam grade is between 7 and 8 and the number of failed exams is less than 2;
• The maximum platform access number is high, meaning over 41;
• The platform access total number is between 1001 and 2000.
The student from cluster 1 has a high level of satisfaction. The programme has offered a good understanding of the project management domain. The performance class is not as good as the one from cluster 0, but it is still a good one, the average for both years being between 7 and 8 and the number of failed exams less than 2. Cluster 0 has 62% of the instances, meaning 80 instances out of 129, and cluster 1 has 38%, the other 49 instances. The majority of the students enrolled in this master degree programme were already initiated into the basic aspects of its domain. Most of the students, those belonging to cluster 0, do not need to spend much time on the platform for communication, information extraction and resource checking.
The Second Cluster Analysis Experiment

In this experiment, the question to be answered is how the students' background, the graduated faculty and the number of e-learning platform accesses influence the student's performance in the second year. The 6 attributes listed in Box 4 were chosen. The results are presented in Figure 7.
Box 4. 4_1_COMMITMENT_CandidatesSelection 4_3_COMMITMENT_ExtraRessourcesUse 6_2_INFORMATION_GraduatedFaculty 6_4_2_INFORMATION_Experience_Class EVALUATION_Performance_Class_II Platform_access_total_number_Class

Figure 7. The second clustering analysis using Simple K-Means

The student profile for Cluster 0, according to the cluster centroid, is:

• Students consider that anyone could be enrolled in the online master degree programme and that basic prior knowledge is not needed;
• Extra resources are only occasionally used;
• Most of the students graduated from the Economic Cybernetics, Statistics and Informatics (ECSI) Faculty of the Academy of Economic Studies;
• The experience in project management, in years, is almost zero (that is why they do not consider prior knowledge to be necessary);
• The total number of platform accesses is between 1001 and 2000, meaning that the students accessed the platform a lot;
• The performance class is 2, meaning the GPA in the second year is over 8 and the number of failed exams is less than 2.

The student profile for Cluster 1, according to the cluster centroid, is:

• Students consider that an admission exam should be required;
• Extra resources are often used;
• Most of the students graduated from the Automatic Control and Computers Faculty of the Politehnica University of Bucharest;
• The project management experience is between 1 and 4 years;
• The total number of platform accesses is between 601 and 1000;
• The performance class is 2, meaning a GPA between 7 and 8 and fewer than 2 failed exams.

We can conclude that the graduated faculty has a strong influence on the learning process and on the obtained performance. Students who graduated from the ECSI Faculty are more accustomed to the teachers' teaching style and demands, in contrast with those in cluster 1. So, even though in cluster 1 the experience is greater and extra resources are used more often, the performance class is only 2. Cluster 0 has 71% of the instances, meaning 91 instances out of 129, and cluster 1 has 29%, the other 38 instances. The majority of the students enrolled in this master degree programme are comfortable with the teachers' teaching style, judging by the performance class. At the same time, the students from cluster 1, despite their experience, obtained a weaker performance.

The Third Cluster Analysis Experiment

In this experiment, the question to be answered is whether there is a connection between evaluation relevancy and performance in the second year, based on communication involvement, communication efficiency, involvement in online activities and the teacher's impact. The 6 attributes listed in Box 5 were chosen. The results are presented in Figure 8.

Box 5. 2_12_SYLLABUS_EvaluationRelevancy 4_6_COMMITMENT_Evaluation 5_1_INSTRUCTORS_Impact 5_5_INSTRUCTORS_CommunicationEffeciency 5_6_INSTRUCTORS_CommunicationInvolvement EVALUATION_Performance_Project_Class_II

Figure 8. The third clustering analysis using Simple K-Means

The student profile for Cluster 0, according to the cluster centroid, is:

• The assessments reflect the knowledge level to a large extent;
• There is a strong connection between the level of involvement on the platform and the evaluation results;
• The teacher's role is considered to be very important;
• Communication between teacher and student is good, making it possible to communicate the most important topics;
• The level of communication with the teacher is quite high;
• The project performance class is 2, meaning that the exam average is between 7 and 8.

The student profile for Cluster 1, according to the cluster centroid, is:

• The assessments reflect the knowledge level only to some extent;
• There is no strong connection between the level of involvement on the platform and the evaluation results;
• The teacher's role is considered to be very important;
• Communication between teacher and student is fairly good, making it possible to communicate the most important elements and their details;
• The level of communication with the teacher is high;
• The project performance class is 1, meaning that the exam average is between 6 and 7.
We can conclude that the students of cluster 1 consider assessments important, but not very much so; they are involved in communication with their teachers and colleagues, and they have good grades on projects. The students belonging to cluster 0, despite the fact that they consider the evaluation relevancy very important, are not so involved in communication and have a lower performance class.
The Fourth Cluster Analysis Experiment

The objective of this experiment is to show how the time spent in front of the computer influences the activity on the platform and the performance in e-learning environments in the second year. The 5 attributes listed in Box 6 were chosen. The results are presented in Figure 9. The student profile for Cluster 0, according to the cluster centroid, is:
Figure 9. The fourth clustering analysis using Simple K-Means
• Participation in forums is only occasional;
• The face-to-face meetings are not considered very important and do not have a big influence on the learning process;
• The number of hours spent in front of the computer is large, between 10 and 16;
• The total number of platform accesses is between 1001 and 2000;
• The performance class is 2, meaning a GPA over 8 and fewer than 2 failed exams.

Box 6. 3_4_PLATFORM_ForumsParticipation 3_6_PLATFORM_FaceMeetings 6_7_2_INFORMATION_DailyActivity_Class EVALUATION_Performance_Class_II Platform_access_total_number_Class

The student profile for Cluster 1, according to the cluster centroid, is:

• Participation in forums is not very frequent;
• The face-to-face meetings are not considered very important and do not have a big influence on the learning process;
• The number of hours spent in front of the computer is low, between 0 and 2;
• The total number of platform accesses is medium, between 601 and 1000;
• The performance class is 1, meaning a GPA between 7 and 8 and fewer than 2 failed exams.
Figure 10. The fifth clustering analysis using Simple K-Means
We can conclude that students who spend more time in front of the computer tend to be more active on the platform, which leads to a better performance than that of the students belonging to the other cluster.
The Fifth Cluster Analysis Experiment The fifth experiment tries to establish a relationship between platform satisfaction and academic performance. The 5 attributes listed in Box 7 were chosen. The results are presented in Figure 10. The student profile for Cluster 0, according to cluster centroid is:
Box 7. 3_1_PLATFORM_Adequacy 3_4_PLATFORM_ForumsParticipation 3_5_PLATFORM_Flexibility EVALUATION_Performance_Class_II EVALUATION_Performance_Project_Class_II
• The platform adequacy is considered to be good to some extent;
• Forum participation is occasional;
• Students want to be involved in any decision regarding the learning activity;
• Academic performance, including the practical score, is very good.
The student profile for Cluster 1, according to cluster centroid is:
• The platform is considered to be totally suitable;
• Forum participation is weak;
• They want to choose their homework deadlines;
• The performance class is lower than that of the students belonging to cluster 0.
We can conclude that students with higher technical expectations are less satisfied with the e-learning platform. These students also participate only occasionally in the open forums, yet they have a better performance than those from the other cluster.
CLASSIFICATION

During the classification experiments, the J48 algorithm is used. The result is a decision tree whose goal is to classify the instances based on a specified attribute (the class attribute).
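A minimal sketch of such a classification run with the Weka Java API, covering both the J48 trees and the PART rule lists used later in this section; the file name and the use of 10-fold cross-validation are assumptions, since the chapter does not state how the reported error rates were obtained:

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.rules.PART;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ClassifyStudents {
    public static void main(String[] args) throws Exception {
        // Hypothetical export of one experiment's attribute subset
        Instances data = DataSource.read("experiment_attributes.arff");
        data.setClassIndex(data.attribute("EVALUATION_Performance_Class_II").index());

        // Decision tree (the chapter's J48 runs)
        J48 tree = new J48();
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(tree, data, 10, new Random(1));
        System.out.println(eval.toSummaryString()); // correctly classified, kappa, MAE, RMSE, ...
        System.out.println(eval.toMatrixString());  // the confusion matrix shown in the boxes

        // Rule list (the chapter's PART runs)
        PART rules = new PART();
        rules.buildClassifier(data);
        System.out.println(rules); // rules of the form "antecedent: class (n/m)"
    }
}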
Box 8. 1_5_NEEDS_InitialRequirements 2_6_SYLLABUS_RessourceSufficiency 2_8_SYLLABUS_WorkImpact 3_1_PLATFORM_Adequacy 3_5_PLATFORM_Flexibility 5_5_INSTRUCTORS_CommunicationEffeciency EVALUATION_Performance_Class_I Maximum_access_number_Class Platform_access_total_number_Class
The First Classification Experiment

The objective of the first experiment is to show when overall satisfaction is achieved and how it influences the performance (Box 8). The experiment results are presented in Figure 11. The characteristics of the results are:

• Correctly Classified Instances = 108, meaning 83.7%;
• Incorrectly Classified Instances = 21, meaning 16.2%;
• Kappa statistic (a measure of agreement between predicted and actual classes) = 0.73, which is a good value (the maximum is 1);
• Mean absolute error = 0.107;
• Root mean squared error = 0.2322;
• Relative absolute error = 34.9%;
• Root relative squared error = 59.34%.

Figure 11. Results of the first classification experiment using J48
The confusion matrix is shown in Box 9. The matrix shows, for the first line, that 14 instances were correctly classified in class 0 (a) and 3 instances were incorrectly classified in class 3 (d) instead of class 0. Based on the confusion matrix, several indicators can be computed, such as:

• TP rate (true-positive rate): the proportion of examples classified in class x among all examples that truly belong to that class. In our case, for class a we have 14/(14+0+0+3) = 14/17;
• FP rate (false-positive rate): the proportion of examples classified in class x among all examples that belong to another class;
• Precision: the proportion of examples that truly belong to class x among all examples classified in class x. For class a, Precision = 14/(14+0+4+3) ≈ 0.67;
• Recall, which equals the TP rate;
• F-Measure, computed as 2*Precision*Recall/(Precision + Recall).
Box 9. Confusion matrix a b c d <-- classified as 14 0 0 3 | a = 0 0100|b=1 4 0 46 7 | c = 2 3 0 4 47 | d = 3
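To make these indicators concrete, they can be recomputed directly from the Box 9 matrix; a short sketch, with the matrix values copied from Box 9:

public class ConfusionMatrixIndicators {
    public static void main(String[] args) {
        // Rows = actual class, columns = predicted class (values from Box 9)
        int[][] m = {
            {14, 0,  0,  3},   // a = 0
            { 0, 1,  0,  0},   // b = 1
            { 4, 0, 46,  7},   // c = 2
            { 3, 0,  4, 47}    // d = 3
        };
        String[] labels = {"a", "b", "c", "d"};
        for (int c = 0; c < m.length; c++) {
            int tp = m[c][c];
            int rowSum = 0, colSum = 0;
            for (int j = 0; j < m.length; j++) {
                rowSum += m[c][j]; // all instances that actually belong to class c
                colSum += m[j][c]; // all instances predicted as class c
            }
            double recall = (double) tp / rowSum;    // = TP rate
            double precision = (double) tp / colSum;
            double f = 2 * precision * recall / (precision + recall);
            // For class a this prints recall 0.82, precision 0.67, F-measure 0.74
            System.out.printf("%s: recall=%.2f precision=%.2f F=%.2f%n",
                    labels[c], recall, precision, f);
        }
    }
}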
In order to obtain a rule set, the PART algorithm is used. The generated rule list is shown in Box 10. The class attribute is EVALUATION_Performace_Class_II. Let us consider, for example, the rule:

EVALUATION_Performance_Class_I = 3 AND 3_5_PLATFORM_Flexibility = Involvement_in_any_decision_regarding_the_learning_activity: 3 (18.0/3.0)

This rule says that if the performance class for the first year is 3 (GPA over 8 and less than 2 failed exams) and the student thinks that he must be involved in any decision regarding the learning activity, then the performance class for the second year is 3 (GPA over 8 and less than 2 failed exams). This rule can help us predict students' performance: in order to increase this performance, students have to be more actively involved in decision making.

The Second Classification Experiment

The objective of the second experiment is to find out under what circumstances the performance in the second year depends on the student's background, the graduated faculty, the use of extra resources, the candidate admission procedure and the experience class (Box 11). The experiment results are presented in Figure 12.

• Correctly Classified Instances = 105, meaning 81.3%;
• Incorrectly Classified Instances = 24, meaning 18.6%;
• Kappa statistic = 0.69;
• Mean absolute error = 0.1212;
• Root mean squared error = 0.2462;
• Relative absolute error = 39.3%;
• Root relative squared error = 62.93%.

The confusion matrix is shown in Box 12. Because the elements of the principal diagonal have large values, we can say that the classification of the instances based on the class attribute was correct in 81% of the cases.
Box 10.
EVALUATION_Performace_Class_I = 3 AND 3_5_PLATFORM_Flexibility = Involvement_in_any_decision_regarding_the_learning_activity: 3 (18.0/3.0)
EVALUATION_Performace_Class_I = 3 AND 3_5_PLATFORM_Flexibility = Choose_own_homework_deadlines AND 2_6_SYLLABUS_RessourceSufficiency = 2: 3 (5.0/1.0)
EVALUATION_Performace_Class_I = 3: 3 (11.0/4.0)
3_5_PLATFORM_Flexibility = The_instructor_decides_the_educational_activities: 2 (7.0/3.0)
Platform_access_total_number_Class = 3 AND EVALUATION_Performace_Class_I = 2: 2 (11.0/1.0)
1_5_NEEDS_InitialRequirements = 1 AND 2_6_SYLLABUS_RessourceSufficiency = 1 AND 5_5_INSTRUCTORS_CommunicationEffeciency = 2: 2 (7.0/1.0)
1_5_NEEDS_InitialRequirements = 2 AND 2_8_SYLLABUS_WorkImpact = 2 AND 3_5_PLATFORM_Flexibility = Involvement_in_any_decision_regarding_the_learning_activity: 2 (6.0/3.0)
3_1_PLATFORM_Adequacy = YES AND 5_5_INSTRUCTORS_CommunicationEffeciency = 3 AND 3_5_PLATFORM_Flexibility = Choose_own_homework_deadlines: 2 (7.0/1.0)
1_5_NEEDS_InitialRequirements = 2 AND 2_8_SYLLABUS_WorkImpact = 3: 3 (5.0)
Platform_access_total_number_Class = 1 AND 5_5_INSTRUCTORS_CommunicationEffeciency = 4: 2 (4.0)
Platform_access_total_number_Class = 1 AND Maximum_access_number_Class = 0: 3 (2.0)
Platform_access_total_number_Class = 0: 2 (22.0/12.0)
Platform_access_total_number_Class = 2 AND 1_5_NEEDS_InitialRequirements = 3 AND 3_1_PLATFORM_Adequacy = To_some_extent: 3 (6.0/2.0)
3_1_PLATFORM_Adequacy = Not_good_enough: 3 (5.0/2.0)
5_5_INSTRUCTORS_CommunicationEffeciency = 3 AND Platform_access_total_number_Class = 2: 0 (4.0/1.0)
Figure 12. Results of the second classification experiment using J48
Box 11. 4_1_COMMITMENT_CandidatesSelection 4_3_COMMITMENT_ExtraRessourcesUse 6_2_INFORMATION_GraduatedFaculty 6_4_2_INFORMATION_Experience_Class EVALUATION_Performance_Class_II Platform_access_total_number_Class

Box 12. Confusion matrix
a b c d <-- classified as
13 0 2 2 | a = 0
0 0 0 1 | b = 1
2 0 50 5 | c = 2
2 0 10 42 | d = 3

Box 13. 6_2_INFORMATION_GraduatedFaculty = AES_CSIE
| 4_1_COMMITMENT_CandidatesSelection = Anyone_could_register
| | 6_4_2_INFORMATION_Experience_Class = 0: 3 (10.0/3.0)

Box 14. 6_2_INFORMATION_GraduatedFaculty = PUB_Automatics AND 4_1_COMMITMENT_CandidatesSelection = An_initial_check_is_required AND 4_3_COMMITMENT_ExtraRessourcesUse = 3: 3 (5.0/2.0)

Box 15. 2_12_SYLLABUS_EvaluationRelevancy 4_6_COMMITMENT_Evaluation 5_1_INSTRUCTORS_Impact 5_5_INSTRUCTORS_CommunicationEffeciency 5_6_INSTRUCTORS_CommunicationInvolvement EVALUATION_Performance_Project_Class_II
The next rule, shown in Box 13, is respected by 10 instances. In order to obtain a rule set, the PART algorithm is used; the class attribute is EVALUATION_Performace_Class_II (Box 14).
The Third Classification Experiment

The objective of the third experiment is to show under what conditions the practical performance in the second year is influenced by the involvement in online activities, the teacher's impact, the communication efficiency and the communication involvement (Box 15). The experiment results are presented in Figure 13.

• Correctly Classified Instances = 91, meaning 70.5%;
• Incorrectly Classified Instances = 38, meaning 29.4%;
• Kappa statistic = 0.58;
• Mean absolute error = 0.1812;
• Root mean squared error = 0.301;
• Relative absolute error = 51.30%;
• Root relative squared error = 71.69%.
The confusion matrix is shown in Box 16. In this case, the number of incorrectly classified instances is quite large, but even so we can consider the classification based on the class attribute to be a good one. A part of the decision tree is presented in Box 17: when the evaluation relevancy is very high, the communication efficiency is quite high and the activity involvement is medium, then the performance is medium (GPA between 7 and 8). Using the PART algorithm, a rule set is obtained; one rule is described in Box 18.
Figure 13. Results of the third classification experiment using J48
Box 16. Confusion matrix
a b c d <-- classified as
7 0 3 0 | a = 0
1 23 4 6 | b = 1
3 4 36 4 | c = 2
3 7 3 25 | d = 3
Box 17. 2_12_SYLLABUS_EvaluationRelevancy = 1
| 5_5_INSTRUCTORS_CommunicationEffeciency = 1: 3 (4.0)
| 5_5_INSTRUCTORS_CommunicationEffeciency = 2
| | 4_6_COMMITMENT_Evaluation = 1: 1 (0.0)
| | 4_6_COMMITMENT_Evaluation = 2: 1 (1.0)
| | 4_6_COMMITMENT_Evaluation = 3: 2 (2.0/1.0)
| | 4_6_COMMITMENT_Evaluation = 4: 1 (0.0)
| | 4_6_COMMITMENT_Evaluation = 5: 1 (2.0/1.0)
| 5_5_INSTRUCTORS_CommunicationEffeciency = 3: 3 (6.0/2.0)

The Fourth Classification Experiment

The fourth experiment tries to establish which conditions must be fulfilled for certain values of the class attribute, based on the time spent in front of the computer and the activity on the platform (Box 19). The experiment results are presented in Figure 14.

• Correctly Classified Instances = 97, meaning 75.1%;
• Incorrectly Classified Instances = 32, meaning 24.8%;
• Kappa statistic = 0.58;
• Mean absolute error = 0.1617;
• Root mean squared error = 0.2843;
• Relative absolute error = 52.46%;
• Root relative squared error = 72.66%.

The confusion matrix is shown in Box 20. A part of the decision tree is presented in Box 21.
Box 18. 5_1_INSTRUCTORS_Impact = 1 AND 2_12_SYLLABUS_EvaluationRelevancy = 2 AND 5_5_INSTRUCTORS_CommunicationEffeciency = 3: 2 (15.0/6.0)
Box 19. 3_4_PLATFORM_ForumsParticipation 3_6_PLATFORM_FaceMeetings 6_7_2_INFORMATION_DailyActivity_Class EVALUATION_Performance_Class_II Platform_access_total_number_Class
Box 20. Confusion matrix
a b c d <-- classified as
9 0 6 2 | a = 0
0 0 1 0 | b = 1
1 0 49 7 | c = 2
3 0 12 39 | d = 3
Figure 14. Results of the fourth classification experiment using J48
Box 21. 3_6_PLATFORM_FaceMeetings = 4 | 3_4_PLATFORM_ForumsParticipation = Often_enough: 2 (1.0)
If the face-to-face meetings are considered not very important and the participation in the different forums is high, then the GPA is between 7 and 8.
ASSOCIATION RULES

Association rules were used to find associations between the attributes that describe the way in which the students' initial requirements were met.
The algorithm used for finding the association rules is an Apriori-type algorithm. This type of algorithm iteratively reduces the minimum support until it finds the required number of rules with the given minimum confidence. The algorithm has an option to mine class association rules. The attributes used are shown in Box 22.

Box 22. 1_5_NEEDS_InitialRequirements 2_6_SYLLABUS_RessourceSufficiency 2_7_SYLLABUS_ProjectsRelevancy 2_8_SYLLABUS_WorkImpact 3_1_PLATFORM_Adequacy 5_4_INSTRUCTORS_CommunicationMethod 5_7_INSTRUCTORS_InteractivityTechniques 5_8_INSTRUCTORS_MaterialsQuality

For a confidence factor greater than or equal to 0.8 and the "car – class association rules" parameter set to False (meaning that general association rules are generated), the algorithm produces, among others, the following rule:

1. 1_5_NEEDS_InitialRequirements=1 5_4_INSTRUCTORS_CommunicationMethod=Forums 17 ==> 3_1_PLATFORM_Adequacy=YES 16 conf:(0.94)

Comment: Where students consider that the MIP online programme meets very well the requirements they had when they enrolled, and where the platform forum is the favorite communication method, the platform is considered highly appropriate. There are 17 instances for the first part and 16 for the second; that is why the confidence factor for this rule is 0.94, where the maximum is 1 (the remaining rules are listed in Box 23).
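A minimal sketch of how such rules can be mined with Weka's Apriori implementation; the file name, the number of rules requested and the exact parameter values are assumptions:

import weka.associations.Apriori;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class MineAssociationRules {
    public static void main(String[] args) throws Exception {
        // Hypothetical ARFF restricted to the nominal attributes of Box 22
        Instances data = DataSource.read("box22_attributes.arff");

        Apriori apriori = new Apriori();
        apriori.setMinMetric(0.8); // minimum confidence, as in the first experiment
        apriori.setNumRules(15);   // how many rules to keep (illustrative)
        apriori.setCar(false);     // false = general rules; true = class association rules
        // For the second experiment one would set the class attribute, call
        // setCar(true) and lower the minimum confidence to 0.6.
        apriori.buildAssociations(data);

        System.out.println(apriori); // prints rules in the form "X n ==> Y m  conf:(c)"
    }
}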
The results of this first experiment are presented in Figure 15. For a confidence factor greater than or equal to 0.6 and the "car – class association rules" parameter set to True (class association rules are mined instead of general association rules), the algorithm produces the rules below. The class is the attribute 1_5_NEEDS_InitialRequirements, which represents the student's general satisfaction level.

1. 2_6_SYLLABUS_RessourceSufficiency=3 5_7_INSTRUCTORS_InteractivityTechniques=Feedback 15 ==> 1_5_NEEDS_InitialRequirements=3 13 conf:(0.87)

Comment: Where the resources provided at the courses are sufficient at a medium level and the preferred interactivity technique is the feedback received from the teacher, the class attribute has the value 3, meaning that the initial requirements are satisfied, but not to a great extent. The confidence factor for this rule is a good value, 0.87; the remaining class association rules are listed in Box 24.
Box 23. 2. 1_5_NEEDS_InitialRequirements=1 2_6_SYLLABUS_RessourceSufficiency=1 14 ==> 3_1_PLATFORM_Adequacy=YES 13 conf:(0.93) 3. 3_1_PLATFORM_Adequacy=YES 5_8_INSTRUCTORS_MaterialsQuality=3 14 ==> 5_4_INSTRUCTORS_ CommunicationMethod=Forums 13 conf:(0.93) 4. 2_6_SYLLABUS_RessourceSufficiency=1 5_8_INSTRUCTORS_MaterialsQuality=2 17 ==> 5_4_INSTRUCTORS_ CommunicationMethod=Forums 15 conf:(0.88) 5. 2_7_SYLLABUS_ProjectsRelevancy=1 3_1_PLATFORM_Adequacy=YES 17 ==> 5_4_INSTRUCTORS_ CommunicationMethod=Forums 15 conf:(0.88) 6. 2_6_SYLLABUS_RessourceSufficiency=3 5_7_INSTRUCTORS_InteractivityTechniques=Feedback 15 ==> 1_5_NEEDS_InitialRequirements=3 13 conf:(0.87) 7. 1_5_NEEDS_InitialRequirements=1 25 ==> 3_1_PLATFORM_Adequacy=YES 21 conf:(0.84) 8. 1_5_NEEDS_InitialRequirements=1 5_8_INSTRUCTORS_MaterialsQuality=2 17 ==> 3_1_PLATFORM_Adequacy=YES 14 conf:(0.82) 9. 2_6_SYLLABUS_RessourceSufficiency=1 5_7_INSTRUCTORS_InteractivityTechniques=Open_questions 16 ==> 5_4_INSTRUCTORS_CommunicationMethod=Forums 13 conf:(0.81) 10. 2_7_SYLLABUS_ProjectsRelevancy=2 3_1_PLATFORM_Adequacy=YES 5_7_INSTRUCTORS_InteractivityTechniques=Open_ questions 16 ==> 5_4_INSTRUCTORS_CommunicationMethod=Forums 13 conf:(0.81) 11. 1_5_NEEDS_InitialRequirements=2 3_1_PLATFORM_Adequacy=YES 20 ==> 5_4_INSTRUCTORS_ CommunicationMethod=Forums 16 conf:(0.8) 12. 1_5_NEEDS_InitialRequirements=3 5_4_INSTRUCTORS_CommunicationMethod=Forums 5_8_INSTRUCTORS_MaterialsQuality=3 20 ==> 2_7_SYLLABUS_ProjectsRelevancy=2 16 conf:(0.8) 13. 1_5_NEEDS_InitialRequirements=3 2_7_SYLLABUS_ProjectsRelevancy=2 5_8_INSTRUCTORS_MaterialsQuality=3 20 ==> 5_4_INSTRUCTORS_CommunicationMethod=Forums 16 conf:(0.8)
Figure 15. Results of the first experiment using Apriori algorithm for association rules
Box 24. 2. 5_7_INSTRUCTORS_InteractivityTechniques=Feedback 5_8_INSTRUCTORS_MaterialsQuality=3 19 ==> 1_5_NEEDS_InitialRequirements=3 15 conf:(0.79) 3. 3_1_PLATFORM_Adequacy=To_some_extent 5_8_INSTRUCTORS_MaterialsQuality=3 17 ==> 1_5_NEEDS_InitialRequirements=3 13 conf:(0.76) 4. 2_7_SYLLABUS_ProjectsRelevancy=2 5_4_INSTRUCTORS_CommunicationMethod=Forums 5_8_INSTRUCTORS_MaterialsQuality=3 22 ==> 1_5_NEEDS_InitialRequirements=3 16 conf:(0.73) 5. 5_4_INSTRUCTORS_CommunicationMethod=Forums 5_8_INSTRUCTORS_MaterialsQuality=3 29 ==> 1_5_NEEDS_InitialRequirements=3 20 conf:(0.69) 6. 2_7_SYLLABUS_ProjectsRelevancy=2 5_8_INSTRUCTORS_MaterialsQuality=3 30 ==> 1_5_NEEDS_InitialRequirements=3 20 conf:(0.67) 7. 3_1_PLATFORM_Adequacy=To_some_extent 5_7_INSTRUCTORS_InteractivityTechniques=Feedback 23 ==> 1_5_NEEDS_InitialRequirements=3 15 conf:(0.65) 8. 5_8_INSTRUCTORS_MaterialsQuality=3 43 ==> 1_5_NEEDS_InitialRequirements=3 28 conf:(0.65) 9. 2_6_SYLLABUS_RessourceSufficiency=3 28 ==> 1_5_NEEDS_InitialRequirements=3 18 conf:(0.64) 10. 3_1_PLATFORM_Adequacy=To_some_extent 5_4_INSTRUCTORS_CommunicationMethod=Forums 24 ==> 1_5_NEEDS_InitialRequirements=3 15 conf:(0.63) 11. 2_8_SYLLABUS_WorkImpact=3 5_7_INSTRUCTORS_InteractivityTechniques=Feedback 21 ==> 1_5_NEEDS_InitialRequirements=3 13 conf:(0.62)
The results of this second experiment are presented in Figure 16.
FUTURE DEVELOPMENT

The described data collection, transformation and analysis processes can be further refined in order to extract knowledge about how the master programme can be improved. The current data set can be extended in order to study the dynamics of student performance over the entire programme period. A study of the evolution of the student profile can also be performed and, based on it, some predictions about student retention could be developed. Attributes describing the teachers' point of view about the students' activity, the quality of their projects, and the quality of their questions and answers will be included in the next survey.
Figure 16. Results of the second experiment using Apriori algorithm for association rules
Knowing the teachers' expectations and satisfaction regarding student activities makes it possible to identify the gaps between the expectations of different participants and to find the causes of these gaps. The analysis results will be used to improve the initial assessment during the admission process. Additional analysis can be done in relation to the e-learning platform, in order to identify usage patterns in connection with the discipline types or the teachers' expectations. The reasons why some platform facilities are seldom or never used can also be discovered.
CONCLUSION

Data mining experiments reveal interesting patterns in the data. In the first cluster analysis experiment, two student profiles were evident. In the first one (cluster 0), the students' expectations were less satisfied at the end of the programme, and the students already possessed knowledge in the project management domain at enrolment; that is why the performance class is higher than the one of the other cluster (cluster 1), these students learning more easily. In the other cluster (cluster 1), the
satisfaction level is high. Knowledge accumulation and working with the online platform have offered a solid basis for understanding the project management domain. The performance class is not as good as the one from cluster 0, but it is still a good one. The statistical model from Figure 5i shows a positive correlation between the degree of fulfillment of the trainee's needs and the student's performance. This finding is somewhat opposite to the results given by the data mining clustering: in cluster 0, students have better learning results, while their expectations are less fulfilled. The data mining results can thus be considered a way to improve the statistical model: they suggest another explanatory variable which should be taken into consideration, the initial level of knowledge. In the second cluster analysis experiment, it can be said that the graduated faculty has a big influence on the e-learning process and also on the students' performance. Students who graduated from the ECSI Faculty are more accustomed to the teachers' teaching style and demands, in contrast with those from the other cluster. So, although the students in the second cluster have more experience and the extra resources
are used more often, the performance class is only a good one. This finding is explained by the preliminary statistical analysis, which showed that the students who work (and have experience) consider that their co-workers do not support them in learning activities. So, professional experience is gained, while the learning performance in the master programme is reduced. The third cluster analysis experiment revealed that the students from the second cluster consider evaluation relevancy important, but not very much so; they are involved in communication with their teachers and colleagues, and they have good grades on projects. The ones from the first cluster, despite considering the evaluation relevancy very important, are not so involved in the communication process and have a lower performance class. The attributes of the first cluster are also shown by the statistical analysis: most students consider that the evaluations reflect their knowledge on a certain topic. Again, the data mining analysis provided a more accurate view: it reveals the features of the students who do or do not believe in the evaluation. The fourth cluster analysis experiment revealed that students who spend more time in front of the computer tend to be more active on the platform, which also leads to a better performance. The fifth cluster analysis experiment revealed that, for students with higher expectations of the platform tools, the learning platform is not appropriately built. These students participate only now and then in the open forums, yet they have a better performance than those from cluster 1. An important result offered by the first classification experiment indicates that if the performance class for the first year is 3 (average over 8 and less than 2 failed exams) and the student thinks that he must be involved in any decision regarding the learning activity, then the performance class for the second year will be 3 (meaning the average is over 8 and less than 2 failed exams).
This means there is a constant determination to learn along the whole master period. The second classification experiment confirms that the students who graduated from the Economic Cybernetics, Statistics and Informatics (ECSI) Faculty (one of the most important faculties within the Bucharest Academy of Economic Studies) have a very good performance. In many cases, when the evaluation relevancy is very high, the communication efficiency is quite high and the activity involvement is medium, the performance is medium (average between 7 and 8). So, there is a strong connection between the students' activity and their performance in the e-learning environment. Based on the obtained association rules and their confidence factors, in the case where general association rules are generated, the most important rule is the following: if students consider that the online programme meets very well the requirements they had when they enrolled and their favorite communication method is the platform forum, then the platform is considered highly appropriate. There are 17 instances for the first part and 16 for the second; that is why the confidence factor for this rule is 0.94. The study offers a detailed analysis of the factors which influence students' performance in online courses. We consider data mining techniques to be more suitable for this analysis, as we had to process a considerable amount of data: for each student, we could evaluate attributes showing his academic performance (grades) as well as his personal perceptions of the factors influencing that performance.
REFERENCES

Arbaugh, J. B. (2004). Learning to learn online: A study of perceptual changes between multiple online course experiences. The Internet and Higher Education, 7, 169–182. doi:10.1016/j.iheduc.2004.06.001
Baker, R., & Yacef, K. (2009). The state of educational data mining in 2009: A review and future visions. Journal of Educational Data Mining, 1, 3–17.
Bulu, S. T., & Yildirim, Z. (2008). Communication behaviors and trust in collaborative online teams. Journal of Educational Technology & Society, 11(1), 132–147.
Barros, B., & Verdejo, M. F. (2000). Analyzing student interaction processes in order to improve collaboration: The degree approach. International Journal of Artificial Intelligence in Education, 11, 221–241.
Chapman, C., Clinton, J., & Kerber, R. (2005). CRISP-DM 1.0, step-by-step data mining guide.
Bodea, V. (2003). Standards for data mining languages. The Proceedings of the Sixth International Conference on Economic Informatics - Digital Economy, (pp. 502-506). INFOREC Printing House, ISBN 973-8360-02-1, Bucureşti. Bodea, V. (2007). Application and benefits of knowledge management in universities – a case study on student performance enhancement. Informatics in Knowledge Society, The Proceedings of the Eight International Conference on Informatics in Economy, May 17-18, ASE Printing House, (pp. 1033-1038). Bodea, V. (2008). Knowledge management systems. Ph.D thesis, supervised by Prof. Ion Gh. Roşca, The Academy of Economic Studies, Bucharest. Bodea, V., & Roşca, I. (2007). Analiza performanţelor studenţilor cu tehnici de data mining: studiu de caz în Academia de Studii Economice din Bucureşti. In Bodea, C., & Andone, I. (Eds.), Managementul cunoaşterii în universitatea modernă. Editura Academiei de Studii Economice din Bucureşti. Bouckaert, R., Frank, E., Hall, M., Kirkby, R., Reutemann, P., Seewald, A., & Scuse, D. (2010). WEKA manual for version 3-6-2. University of Waikato, Hamilton, New Zealand Brew, L. S. (2008). The role of student feedback in evaluating and revising a blended learning course. The Internet and Higher Education, 11, 98–105. doi:10.1016/j.iheduc.2008.06.002
182
Charpentier, M., Lafrance, C., & Paquette, G. (2006). International e-learning strategies: Key findings relevant to the Canadian context. Retrieved from http://www.ccl-cca.ca/pdfs/CommissionedReports/JohnBissInternationalELearningEN.pdf Crisp-dm. (2010). CRoss Industry Standard Process for Data Mining. Retrieved from http://www.crisp-dm.org/ Davenport, T. (2001). Successful knowledge management projects. Sloan Management Review, 39(2). Delavari, N., Beikzadeh, M. R., & Amnuaisuk, S. K. (2005). Application of enhanced analysis model for data mining processes in higher educational system. Proceedings of ITHET 6th Annual International Conference, Juan Dolio, Dominican Republic. Delavari, N., Beikzadeh, M. R., & Shirazi, M. R. A. (2004). A new model for using data mining in higher educational system. Proceedings of 5th International Conference on Information Technology based Higher Education and Training: ITEHT '04, Istanbul, Turkey. European Commission. (2005). Mobilizing the brainpower of Europe: Enabling universities to make their full contribution to the Lisbon Strategy. Brussels, Communication no. 152. Eurostat. (2009). The Bologna Process in higher education in Europe: Key indicators on the social dimension and mobility. European Communities and IS, Hochschul-Informations-System GmbH. Retrieved from http://epp.eurostat.ec.europa.eu/portal/page/portal/education/bologna_process
Friedman, J. H. (1997). Data mining and statistics: What's the connection? Stanford, CA: Stanford University. Guardado, M., & Shi, L. (2007). ESL students' experiences of online peer feedback. Computers and Composition, 24, 443–461. doi:10.1016/j.compcom.2007.03.002 Haddawy, P., & Hien, N. (2006). A decision support system for evaluating international student applications. Computer Science and Information Management Program, Asian Institute of Technology. Kelly, H. F., Ponton, M. K., & Rovai, A. P. (2007). A comparison of student evaluations of teaching between online and face-to-face courses. The Internet and Higher Education, 10, 89–101. doi:10.1016/j.iheduc.2007.02.001 Luan, J. (in press). Data mining applications in higher education. In New Directions for Institutional Research (1st ed.). San Francisco, CA: Jossey-Bass. Luan, J. (2002). Data mining and its applications in higher education. In Serban, A., & Luan, J. (Eds.), Knowledge management: Building a competitive advantage for higher education. New Directions for Institutional Research, 113. San Francisco, CA: Jossey-Bass. Luan, J., Zhai, M., Chen, J., Chow, T., Chang, L., & Zhao, C.-M. (2004). Concepts, myths, and case studies of data mining in higher education. AIR 44th Forum, Boston. Ma, Y., Liu, B., Wong, C. K., Yu, P. S., & Lee, S. M. (2000). Targeting the right students using data mining. Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, (pp. 457-464). Boston, MA.
McDonald, M., Dorn, B., & McDonald, G. (2004). A statistical analysis of student performance in online computer science courses. Proceedings of the 35th SIGCSE Technical Symposium on Computer Science Education, Norfolk, Virginia, (pp. 71-74). McFarland, D., & Hamilton, D. (2006). Factors affecting student performance and satisfaction: Online versus traditional course delivery. Journal of Computer Information Systems, 46(2), 25–32. Monolescu, D., & Schifter, C. (2000). Online focus group: A tool to evaluate online students’ course experience. The Internet and Higher Education, 2, 171–176. doi:10.1016/S1096-7516(00)00018-X Piccoli, G., Ahmad, R., & Ives, B. (2001). Webbased virtual learning environments: A research framework and a preliminary assessment of effectiveness in basic IT skills training. Management Information Systems Quarterly, 25(4), 401–426. doi:10.2307/3250989 Priluck, R. (2004). Web-assisted courses for business education: An examination of two sections of principles of marketing. Journal of Marketing Education, 26(2), 161–173. doi:10.1177/0273475304265635 Ramaswami, M., & Bhaskaran, R. (2010). A CHAID based performance prediction model in educational data mining. IJCSI International Journal of Computer Science Issues, 7(1), 10–18. Ranjan, J. (2008). Impact of Information Technology in academia. International Journal of Educational Management, 22(5), 442–455. doi:10.1108/09513540810883177 Ranjan, J., & Malik, K. (2007). Effective educational process: A data mining approach. Vine, 37(4), 502–515. doi:10.1108/03055720710838551
Romero, C., & Ventura, S. (2007). Educational data mining: A survey from 1995 to 2005. Expert Systems with Applications, 33, 135–146. doi:10.1016/j.eswa.2006.04.005 Sargenti, P., Lightfoot, W., & Kehal, M. (2006). Diffusion of knowledge in and through higher education organizations. Issues in Information Systems, 3(2), 3–8. Shyamala, K., & Rajagopalan, S. P. (2006). Data mining model for a better higher educational system. Information Technology Journal, 5(3), 560–564. doi:10.3923/itj.2006.560.564 Talavera, L., & Gaudioso, E. (2004). Mining student data to characterize similar behavior groups in unstructured collaboration spaces. Proceedings of Workshop on Artificial Intelligence in Computer Supported Collaborative Learning at European Conference on Artificial Intelligence, Valencia, Spain, (pp. 17-23). Tallent-Runnels, M.-K. (2005). The relationship between problems with technology and graduate students’ evaluations of online teaching. The Internet and Higher Education, 8, 167–174. doi:10.1016/j.iheduc.2005.03.005 Waiyamai, K. (2004). Improving quality of graduate students by data mining. Faculty of Engineering, Kasetsart University, Frontiers of ICT Research International Symposium. Witten, I., & Frank, E. (2005). Data mining: Practical machine learning tools and techniques. Elsevier. Young, A., & Norgard, C. (2006). Assessing the quality of online courses from the students’ perspective. The Internet and Higher Education, 9, 107–115. doi:10.1016/j.iheduc.2006.03.001 Zapalska, A., Shao, D., & Shao, L. (2003). Student learning via WebCT course instruction in undergraduate-based business education. Teaching Online in Higher Education (Online). Conference.
ADDITIONAL READING Anjewierden, A., Kollöffel, B., & Hulshof, C. (2007). Towards educational data mining: Using data mining methods for automated chat analysis to understand and support inquiry learning processes. ADML 2007 (pp. 27–36). Crete. Bodea, C. 2007. An Innovative System for Learning Services in Project Management. In Proceedings of 2007 IEEE/INFORMS International Conference on Service Operations and Logistics. And Informatics. Philadelphia, USA, 2007. IEEE. Castells, M., & Pekka, H. (2002). The Information Society and the Welfare State. The Finnish Model. Oxford: Oxford University Press. Demirel, M. (2009). Lifelong learning and schools in the twenty-first century. Procedia Social and Behavioral Sciences, 1, 1709–1716. doi:10.1016/j. sbspro.2009.01.303 Garcia, A. C. B., Kunz, J., Ekstrom, M., & Kiviniemi, A. (2003). Building a Project Ontology with Extreme Collaboration and VD&C. CIFE Technical Report #152. Stanford University. Gareis, R. 2007. Happy Projects! Romanian version ed. Bucharest, Romania: ASE Printing House. Guardado, M., & Shi, L. (2007). ESL students’ experiences of online peer feedback. Computers and Composition, 24, 443–461. doi:10.1016/j. compcom.2007.03.002 Kalathur, S. (2006) An Object-Oriented Framework for Predicting Student Competency Level in an Incoming Class, Proceedings of SERP’06 Las Vegas, 2006, pp. 179-183 Kanellopoulos, D., Kotsiantis, S., & Pintelas, P. (2006). Ontology-based learning applications: a development methodology. In Proceedings of the 24th IASTED International Multi-Conference Software Engineering. Innsbruck, Austria, 2006.
Lytras, M. D., Carroll, J. M., Damiani, E., & Tennyson, R. D. (2008). Emerging Technologies and Information Systems for the Knowledge Society. In First World Summit on the Knowledge Society. Athens, Greece: WSKS. Markkula, M. (2006). Creating Favourable Conditions for Knowledge Society through Knowledge Management, eGorvernance and eLearning. Budapest, Hungary, 2006. FIG Workshop on eGovernance, Knowledge Management and eLearning. Teekaput, P., & Waiwanijchakij, P. (2006). eLearning and Knowledge Management, Symptoms of a Reality. In Third International Conference on eLearning for Knowledge-Based Society. Bangkok, Thailand, 2006. Turner, R. J., & Simister, S. J. (2004). Gower Handbook of Project Management. Romanian version ed. Bucharest, Romania: Codecs Printing House. Young, A., & Norgard, C. (2006). Assessing the quality of online courses from the students’ perspective. The Internet and Higher Education, 9, 107–115. doi:10.1016/j.iheduc.2006.03.001
KEY TERMS AND DEFINITIONS

Association Rule: An implication expression of the form X => Y, where X and Y are disjoint conjunctions of attribute-value pairs. The strength of association rules can be measured in terms of support and confidence: support determines how often a rule applies to the data set, and confidence determines how frequently Y appears in the records that contain X. The objective of association analysis is to find hidden relationships in large data sets.

Classification: The process of learning a function f which assigns a predefined class label y to each set of attributes X. The function f is known as the classification model. A classification model can serve as an explanatory tool to distinguish between instances of different classes; in this case, the classification is considered descriptive modeling. A classification model can also be used to predict the class label of unknown instances; in this case, the classification is considered predictive modeling. Classification techniques are better suited for the prediction or description of data sets with binary or nominal attributes.

Clustering: A technique by which similar instances are grouped together. All the instances grouped in the same cluster carry a certain meaning, a certain utility, or both. Clusters capture the natural structure of the data, so the clustering process may be the starting point for other data handling processes such as summarization.

Data Mining: The process of extracting previously unknown, valid, and operational patterns/models from large collections of data. Essential for data mining is the discovery of patterns without previous hypotheses. Data mining is not aimed at verifying, confirming or refuting hypotheses, but instead at discovering "unexpected" patterns, completely unknown at the time the data mining process takes place, which may even contradict intuitive perception. For this reason, the results are truly valuable.

Decision Tree: A diagrammatic representation of the possible outcomes and events used in decision analysis. A decision tree with a range of discrete (symbolic) class labels is called a classification tree, whereas a decision tree with a range of continuous (numeric) values is called a regression tree. Decision trees are attractive because they show clearly how to reach a decision, and because they are easy to construct automatically from labeled instances. Two well-known programs for constructing decision trees are C4.5 and CART (Classification and Regression Trees).

Educational Data Mining: An emerging discipline concerned with developing methods for exploring the unique types of data that come from educational settings, and using those
methods to better understand students and the settings in which they learn.

E-Learning: A type of distance education in which the teaching-learning interaction is mediated by an environment set up by the new information and
communication technologies, in particular the Internet. The Internet is both the material environment and the communication channel between the actors involved.
APPENDIX

Questionnaire

1. Trainee's Needs (Motivation to Participate in an Online Education Program)

1.1. Why did you choose to follow this master programme? Mark the corresponding cell with X.
I work in projects and I want to improve professionally. I intend to work in projects and I'm particularly interested in this domain. I was convinced by friends that it is an easy programme. I have time and I believe that training in any field is useful.
1.2. What made you enroll in the online MIP and not the classical MIP? Mark the corresponding cell with X.
Lack of time (I have a full-time job.) Legal reasons (I am enrolled to another master, which is not online.) I need a diploma (I believe that online programmes are easier.). Other reason (Please specify it).
1.3. What do you think are the benefits of the online MIP over the classical MIP? Mark the corresponding cell with X. I have access to information, without being forced to go to class. It gives me a community of practice. Other advantages (Please specify them.)
1.4. Have you taken other online programmes? Mark the corresponding cell with X. No Yes (Please indicate the type of programme, duration, the gained satisfaction.)
1.5. Do you consider that the MIP online programme meets the requirements you had when you enrolled for it? (1 – very important, 5 – not important at all) Mark the answer. 1 2 3 4 5
2. Syllabus & Training Providers

2.1. How important is it that the MIP programme is a certified programme? (1 – very important, 5 – not important at all) Mark the answer. 1 2 3 4 5
2.2. How important is it that the MIP programme is organized by a well-known nationally certified organization? (1 – very important, 5 – not important at all) Mark the answer. 1 2 3 4 5
2.3. Do your favorite subjects have a higher degree of interactivity? Mark the corresponding cell with X. Yes (interactivity helps me understand better). No (I don't have time for interactivity). It doesn't matter.
2.4. How important are a clear thematic content and clear requirements of a course for you? (1 – very important, 5 – not important at all) Mark the answer. 1 2 3 4 5
2.5. What format of the course materials is most helpful? Mark the corresponding cell with X. Word documents Power Point slides Electronic books (in html, chm or any other format) I don't care.
2.6. Do you think the resources provided for the course (course support, project models/case studies, templates) are sufficient to acquire the knowledge you need? (1 – sufficient, 5 – not sufficient at all) Mark the answer. 1 2 3 4 5
2.7. Do you consider the projects developed in the various disciplines relevant to the training? (1 – very relevant, 5 – not at all relevant) Mark the answer. 1 2 3 4 5
2.8. Does the theme of the master projects correspond to the work activities? (1 – very, 5 – not at all) Mark the answer. 1 2 3 4 5
2.9. Do you consider that the projects have a proper weight in the final grade for the majority of the courses? (1 – very, 5 – not at all) Mark the answer. 1 2 3 4 5
2.10. How important is the homework given during the course? (1 – very important, 5 – not important at all) Mark the answer. 1 2 3 4 5
2.11. Do you prefer ongoing assessment or summative assessment? Mark the corresponding cell with X. Just the summative assessment Just the ongoing assessment Both of them
2.12. Do you think the results of evaluations so far reflect your knowledge? (1 – very, 5 – not at all) Mark the answer. 1 2 3 4 5
2.13. What kind of evaluation methods should be used and in what proportion? (1-always, 2-often, 3-sometimes, 4-never). Put each number in a different cell. Projects Peer review Multiple choice tests Others (Please, specify them.)
2.14. What is your favorite discipline so far? Justify your answer.
2.15. Which discipline caused you the most dissatisfaction? Justify your answer.
3. Organization and Technical Platform 3.1. Do you consider that the study platform used in online MIP is appropriate? Mark the corresponding cell with X. 3.2. Would it be important to organize a technical training session prior to using the platform? 1 No, because it is difficult to be used, from the technical point of view No, because it hasn’t all the technical facilities I need Yes To some extent
– very important, 5 – not important at all) Mark the answer. 12345 3.3. How important are the online discussions with your colleagues (forums)? 1 – very important, 5 – not important at all) Mark the answer. 12345 3.4. How often do you participate in online discussions? Mark the corresponding cell with X. Often enough, because I consider them useful. Not very often, I usually only read what the others say. Now and then, it depends on the feedback received from my colleagues and instructors. I never participated, I consider them useless.
3.5. Do you consider that a higher flexibility in an online platform would help you get better results? Mark the corresponding cell with X. No, I prefer to have a predetermined syllabus. No, I prefer the instructor to decide the educational activities. Yes, I would like to be able to choose my homework deadlines. Yes, I would like to be involved in any decision regarding the learning activity.
3.6. What are the elements that you like in the organization and platform of MIP online programme?
3.7. What are the elements that you don't like in the organization and platform of MIP online programme?
3.8. Do you consider the face-to-face meetings held in MIP programme useful? (1 – very useful, 5 – not useful at all) Mark the answer. 1 2 3 4 5
3.9. Do you consider that the MIP programme should continue with its online version in the following years, too? Mark the corresponding cell with X. Yes (please justify) No (please justify) I don't care.
4. Trainee’s Commitment 4.1. Do you think a more careful selection (possibly by examination) of the candidates in an online program is needed or anyone interested and who meets legal requirements (minimum level of preparation) should be allowed to register? Mark the corresponding cell with X. €€€€€€€€€€Anyone could register. An initial test is required (test online). An initial check is required (CV or other documents).
4.2. Which is the most important element for your performance in online classes? Mark the corresponding cell with X. Quality of the education platform The degree to which the programme gives me the knowledge that I need The feedback from the instructors Other factor (Please specify it.)
4.3. Did you independently search for and use resources other than those provided in the courses to deepen your knowledge of a certain subject? (1 – very often, 5 – never) Mark the answer. 1 2 3 4 5
4.4. Did you apply the knowledge acquired in MIP at your workplace? Mark the corresponding cell with X. Yes, often enough / Very seldom / No, because what I learned has no utility at work. / No, because I do not have enough knowledge (the courses were too theoretical).
4.5. Do you prefer projects developed individually or in a team? Mark the corresponding cell with X. I prefer projects developed individually, because communication in a team is difficult. / I prefer team work, because my collaborative skills can be developed. / I prefer team work, because it is easier.
4.6. Do you think there is any connection between your involvement in online activities and your evaluation results? (1 – a very strong connection, 5 – no connection) Mark the answer. 1 2 3 4 5
4.7. Do your colleagues at work support you in attending this master programme? (1 – very much, 5 – not at all) Mark the answer. 1 2 3 4 5
5. Instructors’ Involvement
5.1. How important is the instructor’s role in an online educational programme? (1 – very important, 5 – not important at all) Mark the answer. 1 2 3 4 5
5.2. What is the ideal instructor like in an online class? Mark the corresponding cell with X. Involved (ready to offer answers, advice, explanations) / Not so involved (I don’t really need an instructor) / It doesn’t matter.
5.3. What do you think is the instructor’s role in the MIP programme and how important is it? (1 – very important, 5 – not important at all) Put a different mark in each cell. The instructor/teacher facilitates/moderates communication in the virtual community. / The instructor/teacher monitors the master students’ participation. / The instructor/teacher promotes collaborative learning. / The instructor/teacher offers support for learning activities (explanations, recommendations). / Other (Please specify it.)
5.4. What is your favorite method of communication with your instructor/ teacher? Mark the corresponding cell with X. On the platform, on forums On the platform, using online meetings Through e-mail Face-to-face
5.5. How would you rate the effectiveness of your communication with the teachers? (1 – very effective, 5 – not effective at all) Mark the answer. 1 2 3 4 5
5.6. How involved are you in the communication with your online teacher? (1 – very involved, 5 – not involved at all) Mark the answer. 1 2 3 4 5
5.7. What kind of techniques should a teacher use to ensure good interactivity in online courses, and how often should the teacher use these techniques? (1-always, 2-often, 3-sometimes, 4-never) Put each number in a different cell. Feedback on the quality of learning / Creative and open questions / Team work / Others (Please, specify them.)
5.8. How would you rate the quality of materials provided by teachers in relation to their requirements and assessments? (1- very good, 5- unsatisfactory) Mark the answer. 12345
Personal Comments If you want to develop any of the answers or to make some comments, please use the spaces below Proposals, suggestions: Personal information: Age: ____ Graduated faculty: _________________________________________________________________ Other completed training programs related to project management (if there are): __________________ _________________________________________________________________________________ Experience in project management (number of years): ____ Position in your organization: _______________________________________________________ Monthly income (below 1500 RON/month, over 1500 RON/month): ________________________ Daily activity in virtual environment, including the MIP programme (number of hours in average/ day): _____________________________________________________________________________ Favorite activities in virtual environment (list them): _____________________________________ _________________________________________________________________________________ Field of activity (IT, banking, commercial...): _____________________________________________ Other family members working in project management (yes/no): ____
Chapter 9
How to Design, Develop, and Deliver Successful E-Learning Initiatives
Clyde Holsapple, University of Kentucky, USA
Anita Lee-Post, University of Kentucky, USA
ABSTRACT
The purposes of this chapter are three-fold: (1) to present findings from investigating the success factors for designing, developing, and delivering e-learning initiatives, (2) to examine the applicability of Information Systems theories to the study of e-learning success, and (3) to demonstrate the usefulness of action research in furthering understanding of e-learning success. Inspired by issues and challenges experienced in developing an online course, a process approach for measuring and assessing e-learning success is advanced. This approach adopts an Information Systems perspective on e-learning success to address the question of how to guide the design, development, and delivery of successful e-learning initiatives. The validity and applicability of the process approach to measuring and assessing e-learning success are demonstrated in empirical studies involving cycles of action research. Merits of this approach are discussed, and its contributions in paving the way for further research opportunities are presented.
INTRODUCTION
In the pursuit of teaching excellence, today’s educators are confronted with the challenge of how to successfully tap into the transforming power of the
Internet to facilitate or enable learning. As such, a primary objective of this chapter is to present findings from investigating the success factors in designing, developing, and delivering e-learning initiatives. An e-learning success model is introduced to serve not only as a measure of quality assurance in e-learning, but also as a strategy
for ensuring future success in the development and assessment of e-learning. The e-learning success model draws its theoretical basis from a user-centered Information Systems development paradigm. Consequently, a secondary objective of this chapter is to examine the applicability of Information Systems theories to study e-learning success. The validity of our e-learning success model is tested using an action research methodology. An iterative process of diagnosing, action planning, action taking, evaluating, and learning is repeated in manageable cycles following a continuous improvement principle to identify and address barriers to successful e-learning. As a result, a third objective of this chapter is to demonstrate the usefulness of action research in furthering understanding of e-learning success.
BACKGROUND
According to the U.S. Department of Education (Parsad and Lewis, 2008), e-learning encompasses various distance education courses and programs including online, hybrid/blended online, and other distance learning courses. The inclusion of hybrid/blended online courses as e-learning signifies the realization that learning can be extended beyond traditional in-class instruction with the mediation of learning technologies. Following this definition of e-learning, the U.S. Department of Education found that 96% of public 2-year and 86% of public 4-year institutions offered e-learning during the 2006-2007 academic year, with enrollments of 4,844,000 and 3,502,000 respectively. Of the total 2,720 institutions that offered e-learning, only 2% did not use Internet-based technologies at all for instructional delivery. These statistics reinforce the prevalence of Internet-based e-learning in higher education. As a result, we define e-learning as follows: E-learning is a formal education process in which the student and instructor are interacting via
Internet-based technologies at different locations and times. Riley et al., in their 2002 report to Congress on distance education programs, summed up the merits of e-learning precisely in this way: “the Internet, with its potential to expand the reach of higher education dramatically, presents very promising prospects to increase access to higher education and to enrich academic activity.” (Riley et al., 2002). Indeed, e-learning has often been touted as a means to revolutionize the traditional classroom lecture style of learning, in which knowledge is transmitted from teachers to students – the objectivist model of learning (Benbunan-Fich, 2002; Schank, 2001). This is to recognize that e-learning can do much more than just content transmission. It supports an alternative model of learning called constructivism, in which knowledge emerges through active learning: e-learning technologies are used for student-to-student(s) and instructor-to-student(s) interactions and group work, creating a rich learning environment that engages students in more active learning tasks such as problem solving, concept development, exploration, experimentation, and discovery (Nunes and McPherson, 2003; Hardaway and Will, 1997). Hence, the pedagogical paradigm shift in how students learn requires concerted efforts from both the education and information technology fields to collectively chart a course for effective learning in the Internet Age. This chapter lays out the first step towards the pursuit of excellence in e-learning, sharing the same long-term goals as envisioned by a 16-member web-based education commission in their 2000 report (Web-based Education Commission, 2000):
1. To center learning around the student instead of the classroom
2. To focus on the strengths and needs of individual learners
3. To make lifelong learning a practical reality
4. To empower every student
5. To elevate each individual to new levels of intellectual capacity and skill
6. To bring learning to students instead of students to learning

The rest of the chapter is organized as follows. Past e-learning success research is reviewed. Our approach to e-learning success is then introduced. Findings from empirical studies to validate the proposed approach to e-learning are presented. The chapter concludes with a discussion on future directions for research and practice.
THE PURSUIT OF EXCELLENCE IN E-LEARNING
Past Research
Research on e-learning success is diverse – reflecting the complexity of the issue and the multi-disciplinary nature of the field. Education researchers tend to attribute learning success to its quality and focus on defining quality e-learning. The resulting studies are primarily guidelines or “best practices” of e-learning that are developed from case studies (Meyer, 2002; Byrne, 2002; Smith, 2001; Pittinsky & Chase, 2000; Lawhead et al., 1997). The most comprehensive guidelines are Pittinsky & Chase’s 24 benchmarks in seven areas: institutional support, course development, teaching/learning, course structure, student support, faculty support, and evaluation and assessment (Pittinsky & Chase, 2000). On the other hand, business researchers in general, and Information Systems researchers in particular, analyze e-learning success from a socio-technological system perspective – yielding empirical studies that attempt to explore a variety of factors and intervening variables that may have an impact on the success of e-learning. As shown in Table 1, a majority of the studies evaluate e-learning success on a single measure such
as learner satisfaction (Sun et al., 2008), course performance (Nemanich et al., 2009; Simmering et al., 2009; Santhanam et al., 2008; Hall et al., 2006), learning experience (Peltier et al., 2007; Williams et al., 2006), and system use (Sun et al., 2009; Davis & Wong, 2007; Saadé, 2007; van der Rhee et al., 2007; Saadé & Bahli, 2005). However, factors affecting e-learning success are extensive, spanning learner characteristics (e.g., technology readiness, e-learning attitude, computer anxiety), instructor characteristics (e.g., e-learning attitude, teaching style, availability), and instructional design (e.g., learning model, content, interaction), as described in Piccoli et al.’s (2001) virtual learning environment framework. Despite the volume of research studies on e-learning success, it is difficult to interrelate and unify the fragmented research activities undertaken in this area. Part of the problem lies in the ambiguity of the concept of e-learning success. Another problem is the seemingly vast array of factors impacting e-learning success. As a result, it is difficult to understand and isolate critical success factors of e-learning, as there is a lack of consensus about what constitutes success of e-learning. To attain a greater benefit from the fragmented research efforts, there is a need to integrate and formulate a holistic and comprehensive model for evaluating e-learning initiatives. Another concern with the current state of e-learning research is that e-learning researchers have a tendency to rely on literatures from their own respective disciplines with little cross-disciplinary sharing of ideas and findings. This issue was identified by Arbaugh et al. (2009) after they reviewed 182 articles on e-learning in business education from 2000 to 2008. Consequently, there is a need to forge collaborative research between the education and Information Systems fields. A third limitation of these studies is that success measures are derived from assessing the results of the development effort only. There is also a need to broaden the viewpoint of learning success from a result to a process perspective.
Table 1. Empirical studies of e-learning in business education since 2005

Study | Independent factor(s) | Dependent variable(s)
Nemanich et al. (2009) | Student learning ability | Course performance
Simmering et al. (2009) | Computer self-efficacy | Course grade
Sun et al. (2009) | System functionalities meeting the needs for instructional presentation, student learning management, and interactions | Instructor’s continued system usage
Johnson et al. (2008) | Computer self-efficacy; perceived usefulness | Course performance; course satisfaction; course instrumentality
Santhanam et al. (2008) | Instructional strategies that promote self-regulatory learning | Test score
Sun et al. (2008) | Learner’s computer anxiety; instructor’s e-learning attitude; course flexibility; course quality; perceived usefulness; perceived ease of use; assessment diversity | Learner satisfaction
Wan et al. (2008) | Student’s virtual competence | Learning effectiveness; learner satisfaction
Arbaugh & Rau (2007) | Interaction | Student’s perceived learning
Arbaugh & Rau (2007) | Course discipline; course structure | Student’s satisfaction with course delivery medium
Davis & Wong (2007) | Perceived usefulness; flow experience | Learner’s system use
Peltier et al. (2007) | Teaching quality as measured by student-to-student interactions, student-to-instructor interactions, instructor support and mentoring, lecture delivery quality, course content, and course structure | Learning experience as measured by amount of learning, course enjoyment, and recommending to others
Saadé (2007) | Perceived usefulness as measured by enhanced performance, enjoyment, and enhanced abilities | Learner’s system use
van der Rhee et al. (2007) | Student’s technology readiness | Student’s system use
Arbaugh & Benbunan-Fich (2006) | Objectivist teaching practices with collaborative learning techniques | Student’s perceived learning; student’s satisfaction with course delivery medium
Eom et al. (2006) | Course structure; instructor feedback; self-motivation; learning style; interaction; instructor facilitation | Learner satisfaction
Eom et al. (2006) | Instructor feedback; learning style | Student’s perceived learning
Hall et al. (2006) | Student’s value belief; student’s personality | Task performance
Williams et al. (2006) | Team work; group cohesion | Learning experience
Saadé & Bahli (2005) | Perceived usefulness as impacted by cognitive absorption | Learner’s system use
The primary objective of this chapter is to address these three needs.
A Process Approach to E-Learning Success
We advance a process approach for measuring and assessing e-learning success from an Information Systems perspective. The process approach to e-learning success stresses the importance of evaluating success not only on an outcome basis, but throughout the entire process of designing, developing, and delivering e-learning initiatives. The Information Systems perspective of e-learning success acknowledges the shared interest between education and Information Systems research in fostering a deeper understanding of the effectiveness of Internet-based e-learning. The validity of applying Information Systems theories to study e-learning success stems from the recognition that both efforts strive to better meet the needs of their constituencies by offering technological solutions. Furthermore, Information Systems researchers have been studying factors that account for the success of Information Systems since the early 1980s. Related theories and knowledge accumulated since then can be valuable in contributing to the understanding and pursuit of e-learning success. Consequently, we follow an Information Systems prototyping strategy for guiding the design, development, and delivery of e-learning initiatives, and adapt an Information Systems success model to measure and evaluate factors that influence the success of e-learning throughout the entire process of e-learning systems development.
An Information Systems Prototyping Strategy
Prototyping is an approach to developing Information Systems that involves a 4-step interactive process between the user and developer (Naumann and Jenkins, 1982). The user’s basic information requirements are identified in step 1 to allow a
working prototype to be developed quickly in step 2. The user is provided with hands-on use of the prototype in step 3 to identify undesirable or missing features. The prototype is revised and enhanced in step 4 to meet users’ acceptance. Information Systems literature recommends the use of prototyping for experimentation and learning before committing resources to develop a full-scale system – so as to clarify users’ information requirements, allow developers to understand the user’s environment, and promote understanding between the user and developer. (Alavi, 1984; Janson and Smith, 1985). Hardgrave et al. (1999) found that the relationship between prototyping and system success (as indicated by user satisfaction) was contingent on five environmental factors: project innovativeness, system impact, user participation, number of users, and developer experience with prototyping. Drawing on the similarities between e-learning and prototyping environments, prototyping is recommended as the strategy to develop a successful e-learning system. Specifically, a prototype e-learning module should be developed first with the intent of understanding students’ learning needs and attitudes toward e-learning, so that related issues and problems can be identified and addressed adequately before launching a full-scale development of the remaining modules of an online course.
An E-Learning Success Model
The prototype e-learning module is critical in deciphering students’ learning needs and how those needs can be addressed in an e-learning environment. However, further investigation is required to evaluate whether or not the e-learning initiative is a success. It is for this purpose that we create an e-learning success model that provides specifics of what a successful e-learning initiative should be and how to design, develop, and deliver such an initiative. Our e-learning success model, as shown in Figure 1, is adapted from DeLone and McLean’s
Information Systems success model (DeLone and McLean, 2003).

Figure 1. E-learning success model

It follows a process approach for measuring and assessing success in positing that the overall success of e-learning rests on the attainment of success at each of the three stages of e-learning systems development: instructional design, course delivery, and learning outcome analysis. Success of the instructional design stage is evaluated along three success dimensions: system quality, information quality, and service quality. Success of the course delivery stage is evaluated along two success dimensions: use and user satisfaction. Finally, success of the learning outcome stage is evaluated along the net benefits dimension. The arrows shown in Figure 1 capture the interdependences among the three stages of development, as well as within the success dimensions of the course delivery stage. Essentially, success of the instructional design is vital to the success of course delivery, which, in turn, affects the success of learning outcomes. The success of
learning outcomes, on the other hand, has an impact on the success of subsequent course delivery, as indicated by the double arrow linking course delivery and learning outcome stages. Moreover, a course is successfully delivered if users continue to use the e-learning systems because they are already satisfied with the systems. Interdependences between the two success factors within the course delivery stage are depicted by the double arrow connecting use with user satisfaction. Our e-learning success model also includes metrics for each of the six success dimensions relevant to a typical e-learning environment. The system quality dimension measures the desirable characteristics of the course management system such as Blackboard or WebCT. A quality system should be easy-to-use, user friendly, stable, secure, fast, and responsive. The information quality dimension evaluates the course content on aspects of organization, presentation, length, clarity, usefulness, and currency. The service quality
dimension evaluates the essence of student-instructor interactions on attributes of promptness, responsiveness, fairness, competency, and availability. The use dimension measures the extent to which the course elements are used, including PowerPoint slides, audio clips, video clips, lecture scripts, discussion boards, case studies, illustrations, tutorials, and assignments. The user satisfaction dimension gauges the perceptions of students about their e-learning experiences which include satisfaction, enjoyment, success, and approval. The net benefits dimension captures both positive (learning enhancement, empowerment, time savings, and academic success) and negative (lack of face-to-face contact, social isolation, quality concerns, and dependency on technology) aspects of e-learning. The metrics provide a scale against which each of the six success dimensions within the three stages of e-learning system development can be measured by means of an instrument such as a survey. The assessment of the six success dimensions reveals deficiencies of the current e-learning systems and provides impetus for enhancement. It is this process of continuous assessment and improvement of the six success dimensions that leads to the attainment of excellence in e-learning.
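To make the instrument-to-model mapping concrete, the short sketch below arranges the example metrics listed above into a simple lookup structure, as one might do when building a survey-scoring script. This is an illustrative rendering only, not an implementation taken from the chapter; the dimension names and item wordings are paraphrased from the metric lists in this section.

```python
# Illustrative only: example metrics for each success dimension,
# paraphrased from the dimension descriptions in this section.
SUCCESS_DIMENSIONS = {
    "system quality": [
        "easy to use", "user friendly", "stable", "secure", "fast", "responsive",
    ],
    "information quality": [
        "organization", "presentation", "length", "clarity", "usefulness", "currency",
    ],
    "service quality": [
        "promptness", "responsiveness", "fairness", "competency", "availability",
    ],
    "use": [
        "PowerPoint slides", "audio clips", "video clips", "lecture scripts",
        "discussion boards", "case studies", "illustrations", "tutorials", "assignments",
    ],
    "user satisfaction": [
        "satisfaction", "enjoyment", "success", "approval",
    ],
    "net benefits": [
        "learning enhancement", "empowerment", "time savings", "academic success",
        "lack of face-to-face contact", "social isolation", "quality concerns",
        "dependency on technology",
    ],
}
```

Keyed this way, each survey item can be tagged with the dimension it measures, which is what the scoring step in the empirical studies below relies on.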
Action Research Methodology
Our process approach to e-learning success is tested through practical implementation using an action research methodology. Action research was introduced by Kurt Lewin in the 1940s to study social psychology and social changes at the University of Michigan’s Research Center for Group Dynamics (Lewin, 1947). Lewin’s work established the reputation of action research as a “science of practice” that is best suited for studying complex social systems by introducing changes into practice and observing the effects of these changes (Argyris et al., 1985). The fundamental contention of action research is that complex social systems cannot be reduced for meaningful
study. As a result, the goal of action research is to understand the complex process, rather than to prescribe a universal law (Baskerville, 1999). The complex nature of learning is apparent, as pointed out by Meyer (2002):

The problem with most research studies on learning is the difficulty of isolating factors so that their impact (if any) can be identified and understood, separate from the action of other factors in the environment. Unfortunately for researchers, learning is both complex and occurs in very rich environments. It is doubly difficult to unravel influences from the individual’s personality, values, brain, background (family, school, friends, work), and of course, the educational environment (classroom, teacher, pedagogical choices, tools). (p. 24)

As a research methodology, action research differs from case studies in that an action researcher is an active participant in experimenting with interventions that improve real-world practices. A researcher in a case study, on the other hand, is a passive observer of how practitioners solve real-world problems. Action research also differs from applied research in that the goal of action research is not only to report the success of a solution to a problem, as in the case of applied research, but also to evaluate the solution process so as to further the theoretical underpinning of practices (Chiasson et al., 2008; Avison et al., 1999; Robinson, 1993). The scientific rigor of action research lies in its ability to advance actionable explanations of real-world phenomena, thereby achieving both formative validity (the theory-building process is valid) and summative validity (the theory is able to withstand empirical testing) (Lee and Hubona, 2009). Consequently, we adopt an action research methodology to (1) understand barriers to successful e-learning, (2) improve e-learning practices, and (3) validate the new process approach to e-learning success. Our action research methodology involves five iterative phases, namely, diagnosing,
action-planning, action-taking, evaluating, and learning (Susman and Evered, 1978). The diagnosing phase identifies barriers to successful e-learning, so that e-learning interventions to overcome these barriers can be developed in the action-planning phase. The action-taking phase then carries out the e-learning interventions developed. The evaluating phase examines resulting impacts on e-learning success from the actions taken. The learning phase assimilates lessons learned and experiences gained towards a better understanding of e-learning success. These five phases of action research are shown in Figure 2.

Figure 2. The five phases of action research
EMPIRICAL STUDIES
The validity and applicability of our process approach to measuring and assessing e-learning success have been demonstrated in empirical studies involving cycles of action research (Holsapple and Lee-Post, 2006; Lee-Post, 2007; Lee-Post, 2009). In these studies, the first two cycles of
action research served to validate the importance of prototyping as a strategy for e-learning systems development. A prototype module on facility location was developed using animated PowerPoint lectures, audio clips, lecture scripts, and tutorials to present the course content. Students from a quantitative methods course then used this module in an e-learning system on Blackboard to learn the concepts of facility location decisions, key factors in location analysis, technical methods for location evaluation, and quantitative models for screening location alternatives. A course feedback survey was used at the end of both cycles to evaluate the success of the prototype module as well as students’ attitudes towards e-learning. The remaining two cycles of action research were conducted to investigate the usefulness of the e-learning success model. The rest of the e-learning modules for the quantitative methods course were developed to allow us to offer the course entirely online. Students who were online-ready were admitted into the online section of the course. A course satisfaction survey was used at
the end of each cycle to measure all six success dimensions of the online course. Each success dimension was quantified as a numeric measure from ratings obtained via the survey. A low rating for any success dimension signified a deficiency in that area, and efforts were then devoted to rectifying the deficiency. As cycles of action research were conducted to raise the ratings of the six success dimensions, the resulting lessons and experience converged toward a fuller understanding of the impediments to e-learning success and how these impediments could be addressed. Figure 3 shows an overview of the five phases within each of the four cycles of action research. A detailed description of each of the four cycles of action research is provided in the following sections.
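The scoring step just described (average the ratings on the items belonging to each dimension, then treat low averages as deficiencies to target in the next cycle) can be sketched in a few lines. The item identifiers, the response data, and the threshold of 3.0 below are hypothetical; the chapter states only that each dimension was averaged from its related survey items and that low ratings signalled deficiencies.

```python
from statistics import mean

# Hypothetical mapping of survey item IDs to success dimensions;
# the actual instrument items differ.
ITEM_DIMENSION = {
    "q1": "system quality", "q2": "system quality",
    "q3": "information quality", "q4": "service quality",
    "q5": "use", "q6": "user satisfaction", "q7": "net benefits",
}

def dimension_ratings(responses):
    """Average all respondents' ratings on the items belonging to each dimension."""
    scores = {}
    for resp in responses:                     # one dict of item -> rating per respondent
        for item, rating in resp.items():
            dim = ITEM_DIMENSION.get(item)
            if dim is not None:
                scores.setdefault(dim, []).append(rating)
    return {dim: round(mean(vals), 2) for dim, vals in scores.items()}

def deficiencies(ratings, threshold=3.0):
    """Dimensions rated below the threshold become improvement targets for the next cycle."""
    return [dim for dim, score in ratings.items() if score < threshold]

# Example with two hypothetical respondents on a 5-point scale
survey = [
    {"q1": 4, "q2": 5, "q3": 3, "q4": 4, "q5": 2, "q6": 3, "q7": 2},
    {"q1": 5, "q2": 4, "q3": 4, "q4": 4, "q5": 3, "q6": 2, "q7": 3},
]
ratings = dimension_ratings(survey)
print(ratings)
print(deficiencies(ratings))   # for this sample data: ['use', 'user satisfaction', 'net benefits']
```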
First Cycle of Action Research
The first cycle of action research began upon the approval of a proposal to develop an online quantitative methods course in operations management for business undergraduates. In diagnosing the impediments to e-learning success at this early stage of the development process, the current understanding and practice of e-learning development were critically examined in light of Information Systems development theories. The theory-practice gap was found to be a lack of full understanding about students’ learning needs and attitudes towards e-learning. As a result, prototyping was identified as the most appropriate strategy to ascertain students’ learning needs and how e-learning could be used to meet those needs. In the action-planning phase, a prototype e-learning module on one topic of study was planned. By following the guiding principles laid out in Pennsylvania State University’s report (IDE, 1998), an instructional design plan was created to ensure that the module’s learning objectives, content, activities, outcome assessments, and instructional technologies were put together in such a way as to enhance learning. In addition,
a course feedback survey was to be designed to measure the success of the e-learning module. In the action-taking phase, an e-learning module on facility location was developed. Students in the quantitative methods course learned about facility location asynchronously by accessing the e-learning module on Blackboard anytime within a 2-week period instead of meeting face-to-face in class. Content materials were presented using various media so that students could watch an animated PowerPoint show, listen to an audio clip, or read the lecture script, at their own choosing. Learning activities included discussion questions intended for student-to-student interactions and case studies for real-world application of materials learned. A course feedback survey was used to collect students’ background information and their opinions towards e-learning upon the completion of the e-learning module. A copy of the feedback survey can be found in the Appendix. Results of the survey were analyzed in the evaluation phase. Descriptive statistics computed from quantitative responses indicated that our typical student had a fairly high GPA (3.7), a current B standing in class, a course load of four or more courses, above-average digital literacy, but negative before and after opinions of e-learning (both were the same 2.8 on a 5-point scale). Students preferred reading the lecture script and doing case studies, rather than discussion questions. Course materials were rated highest in terms of organization (3.7 on a 5-point scale). “Control when and where to learn” was the most valuable (4.1 on a 5-point scale) e-learning benefit, followed by “Learn materials in less time” (3.3 on a 5-point scale). The qualitative responses indicated that students desired more control over content delivery (e.g., being able to download PowerPoint slides on their own computers), more interactions (e.g., live chat), and more illustrative examples. In summary, the e-learning module was not well received, as evidenced by students’ low rating (2.7 on a 5-point scale) on both the user satisfaction and net benefits dimensions.
Figure 3. Cycles of action research
Reflecting upon the experience in designing, developing, and delivering the e-learning module in the learning phase, the following merits of e-learning were supported: (1) autonomy – able to control when and where to learn; (2) efficiency – able to learn materials in less time; (3) flexibility – able to tailor learning to suit individual learner’s learning style; (4) accessibility – able to extend learning beyond the traditional classroom setting. However, we were also faced with the challenge of a seemingly unsuccessful e-learning module. In particular, students were apprehensive that e-learning was not able to meet many of their learning needs, including “address questions and concerns”, “voice opinion and viewpoints”, and “stimulate interest in the subject,” to name a few.

Second Cycle of Action Research

The second cycle of action research began with diagnosing the problem of a lack of enthusiastic reception for e-learning, as students’ before and after opinions of e-learning were the same low rating of 2.8 out of 5. This indifferent attitude presented a key barrier to successful e-learning that needed to be addressed right away. Scrutinizing the e-learning module from the viewpoint of the e-learning success model in the action-planning phase, the instructional design plan was revised with a goal of improving the module’s user satisfaction and net benefits success dimensions. In the action-taking phase, design changes were made to the e-learning module to enhance its information quality by using more examples to explain the problem-solving process involved in making facility location decisions. In addition, the benefits of e-learning were impressed upon students at the beginning of the quantitative methods course to prepare them to experience these benefits first hand with the e-learning module on facility location. The same course feedback survey was used at the conclusion of the facility location module. In the evaluation phase, the survey results were analyzed and compared with those from the first cycle. Students’ after opinion toward e-learning was 15% more favorable than their before opinion. With the exception of the use dimension, the information quality, user satisfaction, and net benefits dimensions all experienced statistically significant improvement over the first cycle. It was found that students’ attitudes towards e-learning could be positively influenced by informing them of the benefits of e-learning and giving them opportunities to experience those benefits first hand. Such positive attitude towards e-learning played an important role in defining the appropriate set of learning needs that e-learning could impact most. As such, a positive attitude towards e-learning should be viewed as an exogenous factor to the e-learning success model.

Third Cycle of Action Research
Subsequent cycles of action research began with designing the remaining topics of study of the quantitative methods course for online delivery. The e-learning prototype served as a blueprint for the online course design and development. To continue the success assessment of the online course, a course satisfaction survey was designed by expanding the course feedback survey to include questions that measured the remaining two success dimensions in the system design stage: system quality and service quality. To augment the satisfaction survey’s validity, an independently designed course evaluation survey administered by the institution’s Distance Learning Technology Center was also used to measure success of the online course along the six success dimensions. In the diagnosing phase of the third cycle, the realization that e-learning was not for everyone and the lack of opportunity to influence students’ e-learning attitude before they registered for the online course led us to conclude that the failure to discern a student’s e-learning attitude and needs would be an impediment to e-learning. Consequently, a screening process was planned in the action-planning phase to accept students into
the online course only if they were online-ready. A survey was designed to evaluate a student’s online readiness on four dimensions: academic readiness, technical readiness, lifestyle readiness, and e-learning readiness. Students were considered online ready if they received at least a B in all of the course pre-requisites and responded near the high end of the 5-point scale on questions related to technical, lifestyle, and e-learning readiness. In addition, a course expectation survey was designed to understand students’ learning needs and experience. In the action-taking phase, students were asked to fill out four surveys at different times: (1) the online readiness survey before being allowed to enroll into the online course; (2) the course expectation survey at the beginning of the semester; (3) the course evaluation survey toward the end of the semester, and (4) the course satisfaction survey upon completion of the online course. Please refer to the Appendix for a copy of each of these four surveys. A database was created containing the following data about the students: (1) demographics; (2) online readiness; (3) learning needs and experiences; (4) opinions towards the online course’s design, delivery, and impacts; (5) learning outcomes. In the evaluation phase, results from both the course satisfaction and course evaluation surveys were analyzed. To evaluate the success of the online course, an overall rating for each success dimension was computed by averaging all respondents’ ratings on items of the survey related to the specific dimension. Based on the fact that none of the six success dimensions had an average rating below 3 on a 4-point or 5-point scale, the online course was considered to be well-received. In the learning phase, the favorable reception of the online course rendered support to the use of the screening process to circumvent students’ e-learning attitude concerns. In particular, students’ online readiness was a pre-requisite for e-learning success.
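The screening rule described for the third cycle reduces to a simple predicate: at least a B in every course pre-requisite, plus responses near the high end of the 5-point scale on the technical, lifestyle, and e-learning readiness items. The sketch below is a minimal illustration under those assumptions; the field names, the letter-grade scale, and the cut-off of 4 used to operationalize "near the high end" are stand-ins, not details reported in the chapter.

```python
# Hypothetical record for one applicant to the online section
applicant = {
    "prereq_grades": ["A", "B", "B"],    # letter grades in course pre-requisites
    "technical": [5, 4, 5],              # 5-point readiness items
    "lifestyle": [4, 4, 5],
    "e_learning": [5, 5, 4],
}

GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

def is_online_ready(app, min_grade="B", readiness_cutoff=4):
    """Apply the screening rule: B or better in all pre-requisites and
    near-the-high-end answers on the technical, lifestyle, and e-learning scales."""
    grades_ok = all(
        GRADE_POINTS[g] >= GRADE_POINTS[min_grade] for g in app["prereq_grades"]
    )
    readiness_ok = all(
        sum(app[scale]) / len(app[scale]) >= readiness_cutoff
        for scale in ("technical", "lifestyle", "e_learning")
    )
    return grades_ok and readiness_ok

print(is_online_ready(applicant))   # True for the sample record above
```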
Fourth Cycle of Action Research
The fourth cycle of action research began with the goal of raising the average ratings for all six success dimensions, in particular the use and net benefits dimensions, as they were the two that had received the lowest average ratings. In the diagnosing phase, a close examination of the survey items corresponding to the use construct revealed that discussion questions were rated as ineffective (2.89 on a 4-point scale) in contributing to students’ understanding of the course content. This was confirmed by the fact that “Voice my opinion and concerns” was the lowest rated item corresponding to the net benefits construct. The current instructional design plan was then evaluated against the seven principles of effective teaching (Graham et al., 2001) to uncover underlying reasons for the ineffective practices. As such, the barrier to successful e-learning was identified as the unsatisfied need for interactions in learning. In the action-planning phase, learning interventions were designed to promote interactions in learning on three fronts: (1) student-to-content interactions, such as interactive spreadsheets and interactive problem solving programs; (2) student-to-instructor interactions, such as virtual office hours and a 48-hour maximum email response time; and (3) student-to-student interactions, such as study groups and threaded discussions. Learning interaction enhancements were implemented in the action-taking phase. Results from both the course satisfaction and evaluation surveys were analyzed in the evaluation phase. The average ratings for all six success dimensions were raised. It was learned that e-learning success was a process of continuous pursuit of excellence along the dimensions of system quality, information quality, service quality, use, user satisfaction, and net benefits.
Findings
The first two cycles of action research confirmed the utility of a prototyping strategy for e-learning systems development. Not only did the prototype e-learning module help identify “control where and when to learn” and “learn materials in less time” as the merits of e-learning from the students’ perspective, it also served as a blueprint for the development of the rest of the course topics to ensure learning flexibility, efficiency, and autonomy. As such, course materials in each topic of the course were designed in such a way that they were delivered in multiple media formats (animation, audio clips, lecture scripts, PowerPoint slides) to suit different students’ learning styles (visual, aural, read/write, and kinesthetic (Fleming and Mills, 1992)). The subsequent cycles of action research, on the other hand, confirmed the utility of our e-learning success model. The model provides solid specifications of what a successful e-learning initiative should be and gives guidance on how to achieve success in e-learning undertakings. Essentially, a successful e-learning initiative is one rated highly by its users on six dimensions: system quality, information quality, service quality, use, user satisfaction, and net benefits. To ensure success in e-learning undertakings, a process approach should be followed to continuously improve the ratings of all six success dimensions. In other words, attempts should be made to raise the system, information, and service quality ratings in the design stage first, before proceeding to increase the use and user satisfaction ratings in the system delivery stage. The effectiveness of the design and delivery improvements is then monitored in the outcome analysis stage by following through with measures to raise the net benefits rating. Our empirical studies provide examples of effective enhancements made in each of the three stages of e-learning systems development. Instructional design improvements can be as simple as correcting errors in the course materials promptly,
or adhering to a 48-hour maximum email response window. They can also involve more extensive overhaul such as incorporating interactive learning contents and activities, or conducting virtual office hours via videoconferencing. The specific instructional design improvements are then followed through in the course delivery stage. For example, one way to raise the use and user satisfaction dimension with more responsive emails is to include constructive comments and timely feedback in email responses to students’ questions about the course content, directing them to revisit a particular helpful course element. After specific instructional design and course delivery improvements are implemented, the associated net benefits can be monitored in the learning outcome analysis stage. Measures to raise the net benefits dimension can be an affirmation of improved class performance from higher quality instructor-student interactions. Our research reveals that a major barrier to e-learning success is students’ lack of an enthusiastic reception for e-learning. Indeed, as noted by Jones and Rainie (2002): One important unresolved question of how much today’s student will rely on online tools to advance their skills and polish their academic credentials. … Their current behaviors show them using the Internet as an educational tool supplementing traditional classroom education, and it may be difficult to convince them to abandon the traditional setting after they have the kinds of attention afforded them in the college classroom. (pg.19) Students’ indifferent attitudes toward e-learning lead us to recognize that e-learning is not for everybody. As a result, to ensure student success in e-learning, we use an online readiness survey to identify those who are online ready before we accept them into the online course. The purpose of the survey is to evaluate whether or not a student is online ready based on four dimensions: academic preparedness, technical competence,
lifestyle aptitude, and e-learning preference. A student is online-ready if he/she scores highly on all four readiness dimensions. Another impediment to successful e-learning was found to be the failure to satisfy students’ learning needs, in particular, their need for meaningful interactions in learning. The virtual nature of e-learning presents a real challenge in meeting such needs, especially ones that require a human touch and/or physical experience. Currently, we adopt a blended approach to address such needs. With the advance of Web 2.0 technologies, notably Web conferencing, Wiki, virtual reality, and social networking, the need for physical presence to interact may be diminishing. More importantly, not only can these educational technologies be used to support student-to-content, student-to-instructor, and student-to-student interactions, but they can also be used to create communities of learners and collaborators, provide instant access to people and resources, and simulate real-world problem solving. Therefore, it is more important than ever for us, as educators, to be cognizant of developments in e-learning research.
FUTURE DIRECTIONS
Our approach for measuring and assessing e-learning success, from an Information Systems perspective, helps researchers and practitioners better understand the elements of what constitutes this success. It also provides a holistic and relatively comprehensive model of how success in e-learning can be defined, evaluated, and promoted. However, there is still a need to broaden both the depth and breadth of e-learning research. In addition, inter-disciplinary research encompassing education and Information Systems needs to be forged to move e-learning research forward so that the educational promises of the Internet can be more fully realized.
The depth of e-learning research can be extended to investigate the longer-term impacts of e-learning beyond its immediate benefits of learning flexibility and efficiency. The role of technology in other aspects of learning improvement should be explored. As such, studies examining the cognitive and pedagogical implications of Internet technologies are greatly needed. Specifically, how to realize the full potential of the Internet to promote higher-order thinking and learning, elevate intellectual capacity and intelligence level, and support a constructivist learning goal should be the next step in the research agenda of e-learning. The breadth of e-learning research can be extended to consider the perspectives of other stakeholders in e-learning. Our current student-centered focus helps steer research attention away from an emphasis on technology in understanding e-learning. Recognition should also be given to the roles that instructors and the institution play in making e-learning a success. Specifically, further studies are needed to help institutions make wise e-learning decisions such as technology infrastructure investment, technology training and support, and e-learning developmental and pedagogical aids, to name a few. Studies are also needed on the extent to which it is possible to convert instructors who are e-learning skeptics to e-learning adopters, and how to do so. Our current work exemplifies the mutual benefits of inter-disciplinary research. On the educational research front, a deeper understanding of e-learning is fostered. On the Information Systems research front, a more comprehensive view of a successful deployment of information technology is garnered. Further collaboration and sharing of what is known from each research base are needed before progress can be made in broadening the depth and breadth of e-learning research.
REFERENCES
Alavi, M. (1984). An assessment of the prototyping approach to Information Systems development. Communications of the ACM, 27(6), 556–563. doi:10.1145/358080.358095
Arbaugh, J. B., & Benbunan-Fich, R. (2006). An investigation of epistemological and social dimensions of teaching in online learning environments. Academy of Management Learning & Education, 5, 435–447.
Arbaugh, J. B., Godfrey, M. R., Johnson, M., Pollack, B. L., Niendorf, B., & Wresch, W. (2009). Research in online and blended learning in the business disciplines: Key findings and possible future directions. The Internet and Higher Education, 12, 71–87. doi:10.1016/j.iheduc.2009.06.006
Arbaugh, J. B., & Rau, B. L. (2007). A study of disciplinary, structural, and behavioral effects on course outcomes in online MBA courses. Decision Sciences Journal of Innovative Education, 5(1), 65–94. doi:10.1111/j.1540-4609.2007.00128.x
Argyris, C., Putnam, R., & Smith, D. (1985). Action science: Concepts, methods and skills for research and intervention. San Francisco, CA: Jossey-Bass.
Avison, D., Lau, F., Myers, M., & Nielsen, P. A. (1999). Action research. Communications of the ACM, 42(1), 94–97. doi:10.1145/291469.291479
Baskerville, R. L. (1999). Investigating Information Systems with action research. Communications of the Association for Information Systems, 2(19).
Benbunan-Fich, R. (2002). Improving education and training with IT. Communications of the ACM, 45(6), 94–99. doi:10.1145/508448.508454
Byrne, R. (2002). Web-based learning versus traditional management development methods. Singapore Management Review, 24(2), 59–68.
Chiasson, M., Germonprez, M., & Mathiassen, L. (2008). Pluralist action research: A review of the Information Systems literature. Information Systems Journal, 19, 31–54. doi:10.1111/j.1365-2575.2008.00297.x
Davis, R., & Wong, D. (2007). Conceptualizing and measuring the optimal experience of the e-learning environment. Decision Sciences Journal of Innovative Education, 5(1), 97–126. doi:10.1111/j.1540-4609.2007.00129.x
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean Model of Information Systems success: A ten year update. Journal of Management Information Systems, 19(4), 9–30.
Eom, S. B., Ashill, N., & Wen, H. J. (2006). The determinants of students’ perceived learning outcome and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4(2), 215–236. doi:10.1111/j.1540-4609.2006.00114.x
Fleming, N. D., & Mills, C. (1992). Not another inventory, rather a catalyst for reflection. To Improve the Academy, 11, 137–155.
Graham, C., Cagiltay, K., Lim, B. R., Craner, J., & Duffy, T. M. (2001). Seven principles of effective teaching: A practical lens for evaluating online courses. Technology Source, March/April.
Hall, D. J., Cegielski, C. G., & Wade, J. N. (2006). Theoretical value belief, cognitive ability, and personality as predictors of student performance in object-oriented programming environments. Decision Sciences Journal of Innovative Education, 4(2), 237–257. doi:10.1111/j.1540-4609.2006.00115.x
Hardaway, D., & Will, R. P. (1997). Digital multimedia offers key to educational reform. Communications of the ACM, 40(4), 90–96. doi:10.1145/248448.248463
Hardgrave, B. C., Wilson, R. L., & Eastman, K. (1999). Toward a contingency model for selecting an Information System prototyping strategy. Journal of Management Information Systems, 16(2), 113–136.
Holsapple, C. W., & Lee-Post, A. (2006). Defining, assessing, and promoting e-learning success: An Information Systems perspective. Decision Sciences Journal of Innovative Education, 4(1), 67–85. doi:10.1111/j.1540-4609.2006.00102.x
IDE. (1998). An emerging set of guiding principles and practices for the design and development of distance education. Innovations in Distance Education, Penn State University.
Janson, M. A., & Smith, L. D. (1985). Prototyping for systems development: A critical appraisal. Management Information Systems Quarterly, (December), 305–316. doi:10.2307/249231
Johnson, R. D., Hornik, S., & Salas, E. (2008). An empirical examination of factors contributing to the creation of successful e-learning environments. International Journal of Human-Computer Studies, 66(5), 359–369. doi:10.1016/j.ijhcs.2007.11.003
Jones, S., & Rainie, L. (2002). The Internet goes to college. Washington, D.C.: Pew Internet and American Life Project.
Lee, A. S., & Hubona, G. S. (2009). A scientific basis for rigor in Information Systems research. Management Information Systems Quarterly, 33(2), 237–262.
Lee-Post, A. (2007). Success factors in developing and delivering online courses in operations management. International Journal of Information and Operations Management Education, 2(2), 131–139. doi:10.1504/IJIOME.2007.015279
Lee-Post, A. (2009). E-learning success model: An Information Systems perspective. Electronic Journal of eLearning, 7(1), 61–70.
Lewin, K. (1947). Frontiers in group dynamics II. Human Relations, 1, 143–153. doi:10.1177/001872674700100201
Meyer, K. A. (2002). Quality in distance education: Focus on online learning. Hoboken, NJ: Wiley Periodicals, Inc.
Naumann, J. D., & Jenkins, A. M. (1982). Prototyping: The new paradigm for systems development. Management Information Systems Quarterly, (September), 29–44. doi:10.2307/248654
Nemanich, L., Banks, M., & Vera, D. (2009). Enhancing knowledge transfer in classroom versus online settings: The interplay among instructor, student, content, and context. Decision Sciences Journal of Innovative Education, 7(1), 123–148. doi:10.1111/j.1540-4609.2008.00208.x
Nunes, M. B., & McPherson, M. (2003). Constructivism vs objectivism: Where is the difference for designers of e-learning environments? Proceedings of the 3rd IEEE International Conference on Advanced Learning Technologies.
Parsad, B., & Lewis, L. (2008). Distance education at degree-granting postsecondary institutions: 2006–07 (NCES 2009–044). National Center for Education Statistics, Institute of Education Sciences. Washington, DC: U.S. Department of Education.
Peltier, J. W., Schibrowsky, J. A., & Drago, W. (2007). The interdependence of the factors influencing the perceived quality of the online learning experience: A causal model. Journal of Marketing Education, 29(2), 140–153. doi:10.1177/0273475307302016
Piccoli, G., Ahmad, R., & Ives, B. (2001). Web-based virtual learning environments: A research framework and a preliminary assessment of effectiveness in basic IT skills training. Management Information Systems Quarterly, 25(4), 401–426. doi:10.2307/3250989
Pittinsky, M., & Chase, B. (2000). Quality on the line: Benchmarks for success in Internet-based distance education. Washington, D.C.: The Institute for Higher Education Policy, National Education Association.
Smith, L. J. (2001). Content and delivery: A comparison and contrast of electronic and traditional MBA marketing planning courses. Journal of Marketing Education, 23(1), 35–44. doi:10.1177/0273475301231005
Riley, R. W., Fritschler, A. L., & McLaughlin, M. A. (2002). Report to Congress on the distance education demonstration programs. U.S. Department of Education, Office of Postsecondary Education, Policy, Planning, and Innovation, Washington, D.C.
Sun, P. C., Cheng, H. K., & Finger, G. (2009). Critical functionalities of a successful e-learning system: An analysis from instructors’ cognitive structure toward system usage. Decision Support Systems, 48, 293–302. doi:10.1016/j.dss.2009.08.007
Robinson, V. M. J. (1993). Current controversies in action research. Public Administration Quarterly, 17(3), 263–290.
Sun, P. C., Tsai, R. J., Finger, G., Chen, Y. Y., & Yeh, D. (2008). What drives a successful e-learning? An empirical investigation of the critical success factors influencing learner satisfaction. Computers & Education, 50, 1183–1202. doi:10.1016/j.compedu.2006.11.007
Saad’e, R. G. (2007). Dimensions of perceived usefulness: Towards enhanced assessment. Decision Sciences Journal of Innovative Education, 5(2), 289–310. doi:10.1111/j.1540-4609.2007.00142.x Saad’e, R. G., & Bahli, B. (2005). The impact of cognitive absorption on perceived usefulness and perceived ease of use in online learning: An extension of the technology acceptance model. Information & Management, 42, 317–327. doi:10.1016/j. im.2003.12.013 Santhanam, R., Sasidharan, S., & Webster, J. (2008). Using self-regulatory learning to enhance e-learning-based Information Technology training. Information Systems Research, 19(1), 26–47. doi:10.1287/isre.1070.0141 Schank, R. C. (2001). Revolutionizing the traditional classroom course. Communications of the ACM, 44(12), 21–2408. doi:10.1145/501317.501330 Simmering, M. J., Posey, C., & Piccoli, G. (2009). Computer self-efficacy and motivation to learn in a self-directed online course. Decision Sciences Journal of Innovative Education, 7(1), 99–121. doi:10.1111/j.1540-4609.2008.00207.x
Teh, G. P. L. (1999). Assessing student perceptions of Internet-based online learning environment. International Journal of Instructional Media, 26(4), 397–402. van der Rhee, B., Verma, R., Plaschka, G. R., & Kickul, J. R. (2007). Technology readiness, learning goals, and e-learning: Searching for synergy. Decision Sciences Journal of Innovative Education, 5(1), 127–149. doi:10.1111/j.15404609.2007.00130.x Wan, Z., Wang, Y., & Haggerty, N. (2008). Why people benefit from e-learning differently: The effects of psychological processes on e-learning outcomes. Information & Management, 45(8), 513–521. doi:10.1016/j.im.2008.08.003 Web-based Education Commission. (2000). The power of the Internet for learning: Moving from promise to practice. Washington, D.C. Retrieved from http://www.hpcnet.org/ webcommission
Williams, E. A., Duray, R., & Reddy, V. (2006). Teamwork orientation, group cohesiveness, and student learning: A study of the use of teams in online distance education. Journal of Management Education, 30, 592–616. doi:10.1177/1052562905276740
KEY TERMS AND DEFINITIONS
Action Research: An iterative process of diagnosing, action planning, action taking, evaluating, and learning that is used to study complex social systems.
Constructivist Learning: A view of learning in which knowledge emerges through active learning.
E-Learning: A formal education process in which the student and instructor interact via Internet-based technologies from different locations.
Hybrid/Blended Online Courses: Courses in which learning is extended beyond in-class instruction through the mediation of learning technologies.
Information Quality: The desirable characteristics of the course content.
Objectivist Learning: A view of learning in which knowledge is transferred from teachers to students.
Prototyping: An approach to developing information systems whereby a user is provided with hands-on use of a mock-up system for experimentation before a full-scale system is developed.
Service Quality: The desirable characteristics of student-instructor interactions.
System Quality: The desirable characteristics of a course management system such as Blackboard or WebCT.
APPENDIX A. THE COURSE FEEDBACK SURVEY

PART I. Student Information
1. What is your current GPA? <2.0 / 2.0 to 2.5 / 2.6 to 3.0 / 3.0 to 3.5 / 3.6 to 4.0
2. What is your current grade in this course? A / B / C / D / E
3. How many courses are you taking this semester? 1 / 2 / 3 / 4 / >4
4. Have you participated in an Internet-based course before? [ ]-Yes [ ]-No
5. Have you used Blackboard before? [ ]-Yes [ ]-No
6. How often do you use the Internet in your coursework? Never / Seldom / Sometimes / Usually / Always
7. How do you rate your knowledge of using the computer? Poor-1 / 2 / 3 / 4 / Excellent-5
8. How do you rate your knowledge of using the Internet? Poor-1 / 2 / 3 / 4 / Excellent-5
9. What is your opinion towards distance-learning before taking this course? Negative-1 / 2 / 3 / 4 / Positive-5
10. What is your opinion towards distance-learning after taking this course? Negative-1 / 2 / 3 / 4 / Positive-5
PART II. Learning Evaluation
1. The following have been valuable to you in your learning experience (Not Used, or 1 = Strongly Disagree to 5 = Strongly Agree):
PowerPoint slides
Audio to accompany the slides
Script to accompany the slides
Discussion board questions
Case studies
2. Please evaluate the course material section of the course (1 = Strongly Disagree to 5 = Strongly Agree):
Materials are well organized
Materials are effectively presented
Materials are the right length
Materials are clearly written
3. Compared to the traditional classroom format, the web-based delivery of the course better enables you to (1 = Strongly Disagree to 5 = Strongly Agree):
Be actively involved in the learning process
Address my questions and concerns
Voice my opinion and viewpoints
Understand the course materials
Stimulate my interest in the subject
Relate the subject matter to other areas
Put effort into non-assessed work
Control when and where to learn
Learn the materials in less time
Complete the assignments in less time
Use written communication in learning
4. How could the web-based delivery of the course be improved?
5. What do you like the most about the web-based format of the course?
6. What do you like the least about the web-based format of the course?
7. What elements of the subject have you found most difficult to master on the web?
8. How could the instructor make these subjects more easily understandable on the web?
9. Do you have any other comments, questions, or feedback?
APPENDIX B. THE ONLINE READINESS SURVEY

PART I. Student Information
1. What is your current GPA? <2.0 / 2.5 / 3.0 / 3.5 / 4.0
2. How many courses are you planning to take? 1 / 2 / 3 / 4 / >4
3. Are you an upper division business major? [ ]Yes [ ]No -- If No, did you file a request form in BE445? [ ]Yes [ ]No
4. What grade did you earn for the following prerequisites? [ ]CS101 or Microsoft Certification [ ]ACC202 [ ]ECO201 [ ]STA291 [ ]MA113 or MA123, 162
5. Have you participated in an Internet-based course before? [ ]Yes [ ]No
6. Your primary reason for taking the course online is ________________________________________________

PART II. Technical Readiness
These questions are designed to help you assess your readiness for participating in online courses, based on your assessment of your computer setup and technical literacy. On a scale of 1 to 5, rate yourself according to each of the following statements, with 5 indicating the greatest agreement and 1 indicating the least agreement. Be as forthright as possible with your responses.
1. I routinely use Microsoft Office tools on a computer.
2. I know how to access the technical support desk.
3. My computer setup is sufficient for online learning.
4. I have the following pieces of software downloaded on my computer: Microsoft Office tools such as Word, Excel, and PowerPoint; Real Player Basic; Adobe Acrobat Reader; Macromedia Shockwave Player; FireFox or Internet Explorer.
5. I have access to a printer.
6. I have at least a 28.8 speed modem connection.
7. I have access to a dedicated network connection or to a telephone line that can be given over to Internet use for substantial periods of time, perhaps 45 minutes or so, at least 3 times a week.
8. I have access to a dedicated network connection or have a local or national Internet Service Provider/ISP (a company to whom you pay a monthly fee to obtain a connection through a phone line and modem to the Internet).
PART III. Lifestyle Readiness
These questions are designed to help you assess your readiness for participating in online courses, based on your assessment of your lifestyle readiness. On a scale of 1 to 5, rate yourself according to each of the following statements, with 5 indicating the greatest agreement and 1 indicating the least agreement. Be as forthright as possible with your responses.
1. I have a private place in my home or at work near my computer, that is "mine," and that I can use for extended periods.
2. I can put together blocks of time that will be uninterrupted in which I can work on my online courses (a block of time might be something like 90 minutes up to two hours, several times a week, depending upon the number of course credits you are taking online).
3. I routinely communicate with persons by using electronic technologies such as e-mail and voice mail.
4. I have persons and/or resources nearby who will assist me with any technical problems I might have with my software applications as well as my computer hardware.
5. I value and/or need flexibility. For example, it is not convenient for me to come to campus two to three times a week to attend a place and time-based traditional class.

PART IV. Learning Preferences
These questions are designed to help you assess your readiness for participating in online courses, based on your assessment of how you learn best. On a scale of 1 to 5, rate yourself according to each of the six following statements, with 5 indicating the greatest agreement and 1 indicating the least agreement. Be as forthright as possible with your responses.
1. When I am asked to use technologies that are new to me such as a fax machine, voice mail, or a new piece of software, I am eager to try them.
2. I am a self-motivated, independent learner.
3. It is not necessary that I be in a traditional classroom environment in order to learn.
4. I am comfortable waiting for written feedback from the instructor regarding my performance, rather than receiving immediate verbal feedback.
5. I am proactive with tasks, tending to complete them well in advance of deadlines.
6. I communicate effectively and comfortably in writing.
APPENDIX C. THE COURSE EXPECTATION SURVEY

PART I. Student Information
1. What is your current GPA? <2.0 / 2.5 / 3.0 / 3.5 / 4.0
2. How many courses are you taking this semester? 1 / 2 / 3 / 4 / 5 or more
3. How many Internet-based courses have you taken? 0 / 1 / 2 / 3 / 4 or more
4. Are you a business major? [ ]-Yes [ ]-No, my major is: _____________________
5. Are you working? [ ]-No [ ]-Yes, I am working as a: _______________ [ ] hours per week
6. What is your gender? [ ]-Male [ ]-Female
7. What is your student classification? Freshman / Sophomore / Junior / Senior / Graduate
8. What is your main reason for taking this course? [ ]-Required by University Studies [ ]-Required by my major [ ]-Other (e.g., elective)
9. How many hours per week are you planning to spend on this course (excluding class time)? <1 / 2 / 3 / 4-5 / 6-7 / 8 or more
10. Do you have any physical disability? [ ]-Yes [ ]-No
11. What grade do you expect to earn in this course? E/Fail / D / C / B / A
12. How often do you use the Internet in your coursework? Never / Seldom / Sometimes / Usually / Always
13. How would you rate your knowledge of computer use? Poor-1 / 2 / 3 / 4 / Excellent-5
14. How would you rate your knowledge of Internet use? Poor-1 / 2 / 3 / 4 / Excellent-5
15. What is your opinion of learning via the Internet? Negative-1 / 2 / 3 / 4 / Positive-5
PART II. Learning Environment
Compared to the traditional classroom format, the web-based delivery of the course should better enable you to (1 = Strongly Disagree to 5 = Strongly Agree):
Be actively involved in the learning process
Address my questions and concerns
Voice my opinion and viewpoints
Understand the course materials
Stimulate my interest in the subject
Relate the subject matter to other areas
Put effort into non-assessed work
Control when and where to learn
Learn the materials in less time
Complete the assignments in less time
Use written communication in learning
Part III. Learning Experience
The following have been valuable to you in your learning experience (1 = Strongly Disagree to 5 = Strongly Agree):
Seeing the professor
Hearing the professor
Understanding the professor
Obtaining feedback from the professor
Obtaining feedback from assessed work
Guiding by clear learning objectives
Guiding by detailed course outline
Asking questions while learning
Participating in student-student communications
Presenting thoughts to class
Presenting thoughts to professor
Understanding textbook
Understanding course material
Applying course material
Integrating course material
Learning definitions
Practicing problem solving
Using written communications
Completing assignments
Taking practice examinations
Reviewing course content
Studying in groups
Knowing current standing in class

Part IV. Expectations
1. What elements of the course do you expect to be most difficult to master on the web?
2. What suggestions do you have for overcoming these difficulties?
3. Do you have any other comments, questions, or feedback?
APPENDIX D. THE COURSE EVALUATION SURVEY
APPENDIX E. THE COURSE SATISFACTION SURVEY

PART I. Student Information
1. What is your current GPA? <2.0 / 2.5 / 3.0 / 3.5 / 4.0
2. How many courses are you taking this semester? 1 / 2 / 3 / 4 / 5 or more
3. How many Internet-based courses have you taken prior to this course? 0 / 1 / 2 / 3 / 4 or more
4. Are you a business major? [ ]-Yes [ ]-No, my major is: _____________________
5. Are you working? [ ]-No [ ]-Yes, I am working as a: _______________ [ ] hours per week
6. What is your gender? [ ]-Male [ ]-Female
7. What is your student classification? Freshman / Sophomore / Junior / Senior / Graduate
8. What is your main reason for taking this course? [ ]-Required by University Studies [ ]-Required by my major [ ]-Other (e.g., elective)
9. How many hours per week have you been spending on this course? <1 / 2 / 3 / 4-5 / 6-7 / 8 or more
10. Do you have any physical disability? [ ]-Yes [ ]-No
11. What grade do you expect to earn in this course? E/Fail / D / C / B / A
12. What is your opinion towards distance-learning before taking this course? Negative-1 / 2 / 3 / 4 / Positive-5
13. What is your opinion towards distance-learning after taking this course? Negative-1 / 2 / 3 / 4 / Positive-5
14. What are the expectations you had for this course that are being met?
15. What are the expectations you had for this course that are not being met?
PART II. Learning Environment
The following have been valuable to you in your learning experience (Not Used, or 1 = Strongly Disagree to 5 = Strongly Agree):
1. PowerPoint slides
2. Audio to accompany the slides
3. Script to accompany the slides
4. Discussion board questions
5. Case studies
6. Practice problems
7. Excel tutorials
8. Assignment problems
9. Practice exam
PART III. Evaluations
Your evaluation of the course material of the course is (1 = Strongly Disagree to 5 = Strongly Agree):
1. Materials are well organized
2. Materials are effectively presented
3. Materials are the right length
4. Materials are clearly written
5. Materials are useful
6. Materials are relevant
7. Materials are up-to-date

Your evaluation of the system quality of the course is (1 = Strongly Disagree to 5 = Strongly Agree):
1. The system is easy to use
2. The system is user friendly
3. The system is stable
4. The system is secure
5. The system is fast
6. The system is responsive

Your evaluation of the instructor quality of the course is (1 = Strongly Disagree to 5 = Strongly Agree):
1. The instructor is prompt
2. The instructor is responsive
3. The instructor is fair
4. The instructor is knowledgeable
5. The instructor is available

PART IV. Suggestions
Your evaluation of the overall quality of the course is (1 = Strongly Disagree to 5 = Strongly Agree):
1. You are satisfied with the course
2. You enjoyed the learning experience
3. You believe the system is successful
4. You will recommend the course to others

1. How could the web-based delivery of the course be improved?
2. What do you like the most about the web-based format of the course?
3. What do you like the least about the web-based format of the course?
4. What elements of the subject have you found most difficult to master on the web?
5. How could the instructor make these subjects more easily understandable on the web?
6. Do you have any other comments, questions, or feedback?
Section 3
Factors Influencing Student Satisfaction and Learning Outcomes
Chapter 10
Quality Assurance in E-Learning
Stacey McCroskey, Online Adjunct Professor, USA
Jamison V. Kovach, University of Houston, USA
Xin (David) Ding, University of Houston, USA
Susan Miertschin, University of Houston, USA
Sharon Lund O'Neil, University of Houston, USA
DOI: 10.4018/978-1-60960-615-2.ch010
ABSTRACT
Quality is a subjective concept, and as such, there are many criteria for assuring quality, including assessment practices based on industry standards and accreditation requirements. Most assessments, including quality assurance in e-learning, occur at three levels: individual course assessments, department or program assessments, and institutional assessments, although these levels often cannot be distinctly delineated. While student evaluations are usually included within these frameworks, student views are but one variable in the quality assessment equation. To offer some plausible perspectives of how students view quality, this chapter provides an overview of quality assurance for online learning from the course, program, and institutional viewpoints and reviews some of the key research related to students' assessment of what constitutes quality in online courses.
INTRODUCTION
Quality is a subjective concept and, therefore, is open to interpretation by the many stakeholders involved in higher education. These stakeholders include students, alumni, faculty, administrators,
parents, oversight boards, employers, state legislatures, local governing bodies, transfer institutions, and the public. Because of the diversity of stakeholders, Cleary (2001) suggests that “Each college or university, via its constituents, should determine what constitutes quality on its campus” (p. 20). Institutions can then identify suitable performance indicators to use in assessing goal
achievement (i.e., improving student learning outcomes). In education, there are many criteria for assuring quality. The most widely used criteria are based on industry practices and/or are described within academic accreditation standards. The guidance provided by both industry and accreditation standards has evolved over time; concurrently, e-learning has emerged as a strong and viable approach alongside traditional instruction. Thus, quality standards also have evolved to encompass principles and best practices for online education. Many of the current guidelines, regardless of their origins or applications, have been designed in much the same way as traditional quality management standards used in industry, such as the Malcolm Baldrige National Quality Award (MBNQA) (NIST, 2009a) and ISO 9000 (ISO, 2009b); that is, to be non-prescriptive and adaptable. These general guidelines provide a framework for building a quality management system, but do not specify how to fulfill or achieve the elements stipulated within the framework. In business, the elements of such frameworks are achieved through programs such as Total Quality Management (Ahire, Landeros, & Golhar, 1995), Lean (Shah & Ward, 2007), or Six Sigma (Schroeder, Linderman, Liedtke, & Choo, 2008). In education, these elements are fulfilled at the discretion of the faculty/instructors, who have a professional obligation to continuously improve instruction (Goodson, Miertschin, Stewart, & Faulkenberry, 2009), and through a variety of instructional design and delivery monitoring processes at the program or course level. This approach to assuring quality suits the education environment where, with only general guidelines to follow, academicians still have considerable freedom to conduct their courses as they see fit. That is, instructors usually establish their own learning objectives for a given subject and determine how to assess and evaluate their courses to ensure and continuously improve educational quality (SACS, 2008).
CONTINUOUS QUALITY IMPROVEMENT Continuous quality improvement (CQI) in higher education, which began in the early 1990s, is based on the principles and practices of Total Quality Management (TQM). TQM has been widely used in the business community as a strategy to motivate the constant improvement of work processes to exceed customers’ expectations (Dean & Bowen, 1994). Within higher education, TQM has often been successfully applied to administrative operations (Montano & Utter, 1999). CQI, however, is the preferred term when referring to improving the design and administration of academic programs or courses because it emphasizes traditions with which scholars are already familiar – constantly striving for a higher goal (i.e., to seek out and implement best practices including new learning modalities, teaching methods, facilitation strategies, etc.). For example, when a course is scheduled to be offered more than once, many instructors update their syllabi and add new content to the course through revised activities or different reading assignments. Such actions encompass the basic notion of what it means to continuously improve quality in education, and CQI practices provide educators with a method for rethinking the way they approach teaching and learning activities in an effort to do their best to improve student learning outcomes (AAHE, 1994). Actions taken by individual faculty members to improve courses are often informal and undocumented; and thus, while extremely important, do not provide evidence of CQI that can be used by programs and institutions to assure various constituencies that their programs are both high quality and effective. Formal CQI initiatives help institutions set goals, identify necessary resources and strategies, and then measure progress towards fulfilling their ideal purpose (Moore, 2002). Formal CQI initiatives usually involve documentation of efforts at an institutional or
program level. Documentation often starts with the statement of goals followed by data collection activities that demonstrate support for goal achievement. Often formal CQI initiatives are tied to accreditation standards and activities. Accreditation processes require institutions and programs to provide evidence that supports many attributes of quality, including the acceptability of new, emerging, and innovative forms of instruction. Online instruction is one example. In terms of the effectiveness of online instruction, much of what we know about how to improve student learning outcomes in traditional classroom settings may be useful in online environments (ADEC, 2003; WCET, 2001). Assuring the quality of online instruction, however, is critical because direct contact with students is often limited and relies partially, if not solely, on the use of information and communication technologies (ICTs).
ACCREDITATION-DRIVEN ASSESSMENT PRACTICES Accreditation is one of the primary approaches used by institutions to demonstrate course and program quality. Current accreditation standards have incorporated ways to address the quality of online education and now address quality within online instruction at the course, program, and institutional level. The following discussion provides specific examples of factors considered important for online courses/programs by virtue of their inclusion in accreditation standards. In 2001, the eight higher education regional accrediting commissions collectively adopted and endorsed a set of “best practices” for distance education, which describe quality standards for electronically offered programming (Howell & Baker, 2006). These practices consist of 27 principles across 5 relevant institutional activities. The framework of institutional activities implies that online course quality requires: 1) institutional commitment, 2) measures to ensure well
structured curricula and effective instruction, 3) faculty support, 4) student support, and 5) measures to evaluate and assess online offerings. The 27 practices were established to help close a gap between emerging learning environments and regional accreditation standards for fulfilling elements of institutional quality and to facilitate self-assessment and evaluation of online instruction within each region (WCET, 2001). In a similar light, benchmarks for ensuring quality in online education were investigated by Phipps and Merisotis (2000) for the Institute of Higher Education Policy. Their work identified 24 benchmarks that span 7 areas of importance. The seven areas identified were institutional support, course development, teaching and learning, course structure, student support, faculty support, and evaluation and assessment. Notice the congruence between the seven areas of importance identified by these investigators and the accreditation framework of institutional activities that are considered important. These similarities indicate a common perception throughout the U.S. of what quality in online/distance education means (Parker, 2008). This supposition was confirmed by a U.S. Department of Education (2006) study which found that, despite the variation among standards and assessment techniques, accreditation reviewers demonstrate consistency about indicators of quality and the evaluation of online programs. Thus, we can see that accreditation guidelines for online learning share several common themes, including strong institutional commitment, adequate curriculum and instruction, sufficient faculty support, ample student support, and consistent learning outcome assessment (Wang, 2006). Furthermore, these general guidelines reflect the principles used by higher education institutions that have successfully adopted a systematic CQI approach (AQIP, 2008). Within the guidelines provided, the assessment of a curriculum or program is often carried out through accreditation or certification processes
that require mapping to a standard. Aligning assessment with standards is known as standards-based assessment, which lends credibility to assessment processes (Brown & Mundrake, 2007). Whereas professionals recognize that assessment needs to be improvement/enhancement-based as opposed to accreditation-driven, it is also true that a threat of losing accreditation adds a sense of urgency to assessment (Hatfield & Gorman, 2000). Accreditation, after all, is the accepted standard of accountability for schools and programs, which is evidenced by the majority of K-12 schools in the U.S. being evaluated and accredited by six regional accrediting associations (Frank, 2004). Schools embracing online education must take measures to include online courses and programs in already established standards-based and/or accreditation-driven assessment processes.
Accreditation and Learning Quality Unfortunately, a universally accepted answer to the question “what is good learning?” eludes us still today, and this is true of both e-learning and traditional learning. As a result, the quality of education, both in-person and online, continues to be quite variable as evidenced by little to no significant consensus regarding effective educators’ teaching strategies and effective students’ learning strategies. In fact, evidence of quality improvement in higher education is often limited to documentation that is used mainly for accreditation purposes (Ehlers, 2009). It has been suggested that accreditation practices do more to raise costs than they do to improve the quality of the education that students receive (Leef & Burris, 2003). Also, there has been much attention surrounding higher education as funding sources are under increasing pressure to ensure that taxpayer dollars are being allocated appropriately (Cleary, 2001). The aim of accreditation practices is for institutions to perform self-evaluations and peer reviews of their current practices against the accepted educational standards for their region or
discipline. Further, the intent is that institutions will use what they learn from this examination to ensure and improve the quality of their academic programs (Eaton, 2003). These are, however, only aims and intentions. In many cases, nonsubstantive “improvement” work is done only to fulfill accreditation requirements; therefore, it has been alleged that accreditation practices do not, in fact, ensure academic quality (Leef, 2003). This discrepancy between practice and intent is unfortunate because it represents a lost opportunity for an institution to learn how to improve the quality of its educational programs. It is impossible to force all institutions to enthusiastically embrace the accreditation process as a mechanism for real quality improvement as opposed to viewing it as simply a mountain of required paperwork. The key is garnering buy-in from participants through collaboration and education as well as building the capability to translate the macro-level review process into activities and information that adds value to day-to-day teaching and learning processes. However, this translation is sometimes difficult (D’Andrea, 2007).
LEARNING ENHANCEMENT AND IMPROVEMENT
Distance/online programs have typically been viewed as a way to expand access to higher education that is not site- or time-bound. Yet, many stakeholders "…stress the need to have a better understanding of what contributes to quality in on-line learning" (Meyer, 2002, p. 1). Current accreditation practices focus mainly on ensuring that online classes are at least as good as face-to-face classes; hence, accreditation standards relative to online education are mostly assessment-, not enhancement-, based. If, however, quality learning is the primary instructional goal, then accreditation- and institutional-driven assessment practices need to become more enhancement-based.
Existing accreditation practices tend to focus on either assessment or enhancement. Assessment, which has historically dominated higher education, tends to emphasize quantitative measurement of outcomes and compliance with standards. Assessment results are sometimes then linked to negative sanctions with the intent of forcing institutions to take action to comply with standards. This assessment model often inspires instructors to focus mainly on accountability rather than improvement (Harvey, 1998, 2002). Frequently, this phenomenon has been observed in public K-12 education and is known as "teaching to the test". Enhancement approaches, on the other hand, employ more qualitative evaluations that focus on important, less measurable aspects of higher education, which are often overlooked by traditional assessment models. Included in the list of less measurable factors are curriculum development processes, staff and faculty knowledge of pedagogical theory, and efforts to build communities of practice (D'Andrea, 2007). One example of an enhancement-based approach to quality improvement within higher education is the Higher Learning Commission's Academic Quality Improvement Program (AQIP). The enhancement-based approach is reflected in the defined categories addressed within this program (AQIP, 2009), which include:
1. Understanding students' and other stakeholders' needs
2. Valuing people
3. Leading and communicating
4. Supporting institutional operations
5. Planning continuous improvement
6. Building collaborative relationships
7. Helping students learn
8. Accomplishing other distinctive objectives
9. Measuring effectiveness
The AQIP model helps institutions examine if they are doing the right things to achieve their unique mission and if they are doing these things optimally. Hence, enhancement models such as the AQIP have the potential to engage staff and faculty in improvement efforts through the use of a formative feedback process that can directly influence teaching and learning activities (D'Andrea & Gosling, 2005; Gosling & D'Andrea, 2001).

Best Practices and Benchmarks in Quality
Specific guidance concerning the quality of online education provided by accrediting agencies often involves "best practices" (Howell & Baker, 2006). If accreditation-driven practices, and, thus, institutional-driven practices, are to become more enhancement-based, there should be a greater focus on achieving quality programs through investment in the conduct of instructional design research and the implementation of best practices. Several models and frameworks that are being used by institutions to assess the quality of e-learning are partially linked to accreditation-driven practices as well as to instructional design research. Some of these organization-developed models to evaluate online and blended/hybrid courses have become accepted standards, benchmarks, and best practices for ensuring quality.

Quality Matters™
The most widely recognized set of standards to measure the quality of instruction and course design within online programs is Quality Matters™ (QM, 2009). Quality Matters™ is an evaluation program that has been adopted by a growing number of institutions across the U.S. This program encompasses a set of standards to measure the quality of instruction and course design, which is based on best practices and instructional design research (Pollacia & McCallister, 2009). Through a faculty-driven, peer review process, courses are assessed on 40 specific elements, which are distributed across eight broad standards including:
1. Course overview and introduction: the course should include "getting started" instructions, a statement of purpose, expectations of etiquette, instructor and student introductions, and stated prerequisites (i.e., student preparation and necessary technical skills)
2. Learning objectives: the course and each module have learning objectives that are measurable, aligned, appropriate, and clearly stated
3. Assessment and measurement: the assessment methods/instruments are appropriate and aligned with the learning objectives, course activities, and resources; the grading policy is clearly stated; criteria for evaluating student work are described; practice assignments are provided
4. Resources and materials: the instructional materials support the learning objectives and are appropriate for the course; the link between the instructional materials and the learning activities is explained to students and resources are cited
5. Learner engagement: the learning activities support achievement of the learning objectives and encourage instructor-student, content-student, and student-student interaction, as appropriate; requirements for student involvement as well as expectations of instructor responsiveness and availability are clearly stated
6. Course technology: the course effectively uses available tools and media to support the learning objectives and content delivery; tools and media promote student engagement; access instructions are adequate and easily understood; navigation is logical, consistent, and efficient
7. Learner support: the course instructions describe technical support offered and how the institution's academic support system and student support services can help students effectively use online resources and achieve their educational goals
8. Accessibility: the course conforms to the Americans with Disabilities Act (ADA, 1994) standards and institutional policies for online and hybrid courses; course pages are readable, contain meaningful links, and offer equivalent alternatives to audio and visual content (QM, 2009)
Quality Matters™ encourages peer review on a continual basis to enhance the quality of online courses, offers institutional subscriptions and fee-based course reviews and training, and provides a basic course evaluation rubric free to non-subscribers (QM, 2006). Standards, like those provided by Quality Matters™, focus on course design and provide a checklist of the elements needed to build quality into online courses, thus ensuring the quality of instruction beginning the first day the course is offered (Little, 2009; Pollacia & McCallister, 2009). In addition, instructors can use these standards to revise or enhance existing courses – especially those that have been transferred directly from a traditional face-to-face setting to an online instructional format.
Institute for Higher Education Policy The Institute for Higher Education Policy (IHEP) is an independent, nonprofit organization that is dedicated to access and success in postsecondary education around the world (“About IHEP,”). In addition, IHEP uses unique research and innovative programs to inform key decision makers who shape public policy and support economic and social development. In 2000, IHEP published a list of benchmark criteria to use to determine if an e-learning program can be recognized as a quality program (Quality on the line: Benchmarks for success in Internet-based distance education, 2000). They proposed 24 benchmarks for measuring the quality of Internet-based learning. The researchers grouped these benchmarks into seven categories:
1. Institutional support
2. Course development
3. Teaching/learning
4. Course structure
5. Student support
6. Faculty support
7. Evaluation and assessment
NACOL National Standards for Quality Online Teaching
National Standards for Quality Online Teaching is designed to provide states, districts, online programs, and other organizations with a set of quality guidelines for online teaching and instructional design. The 13 standards include a comprehensive set of criteria that can be adopted by teachers, schools, and parents across the nation to evaluate online teaching quality and implement best practices. Through a detailed literature review of existing standards related to online teaching and a cross-reference of standards, followed by a research survey of NACOL members and experts, 13 standards were identified:
1. The teacher meets the professional teaching standards established by a state-licensing agency or the teacher has academic credentials in the field in which he or she is teaching.
2. The teacher has the prerequisite technology skills to teach online.
3. The teacher plans, designs, and incorporates strategies to encourage active learning, interaction, participation, and collaboration in the online environment.
4. The teacher provides online leadership in a manner that promotes student success through regular feedback, prompt response, and clear expectations.
5. The teacher models, guides, and encourages legal, ethical, safe, and healthy behavior related to technology use.
6. The teacher has experienced online learning from the perspective of a student.
7. The teacher understands and is responsive to students with special needs in the online classroom.
8. The teacher demonstrates competencies in creating and implementing assessments in online learning environments in ways that assure validity and reliability of instruments and procedures.
9. The teacher develops and delivers assessments, projects, and assignments that meet standards-based learning goals and assesses learning progress by measuring student achievement of learning goals.
10. The teacher demonstrates competencies in using data and findings from assessments and other data sources to modify instructional methods and content and to guide student learning.
11. The teacher demonstrates frequent and effective strategies that enable both teacher and students to complete self- and pre-assessments.
12. The teacher collaborates with colleagues.
13. The teacher arranges media and content to help students and teachers transfer knowledge most effectively in the online environment.
Industry Influences on Educational Standards
Unfortunately, the blind importation of ready-made quality assessment processes from business and industry has often implied a lack of confidence in academic professional judgment and the peer review processes that traditionally have been in place within higher education. In addition, the use of these types of review processes has been driven mainly by external pressure to ensure accountability regarding public funding to support higher education (D'Andrea, 2007). To address some of these issues, several industry standards
have been translated specifically for use within educational settings.
E-Learning Maturity Model
The E-Learning Maturity Model (eMM) (Marshall & Mitchell, 2007) provides a quality improvement framework by which institutions can assess and compare their capability to sustainably develop, deploy, and support e-learning. The eMM is based on models initially developed for the software development industry, namely, the Capability Maturity Model (CMM) (Paulk, Curtis, Chrissis, & Weber, 1993) and the SPICE project, a major international initiative to support the International Standard ISO/IEC 15504 for (Software) Process Assessment (El Emam, Drouin, & Melo, 1998). The CMM model was derived from actual data as opposed to theory. The fundamental idea of CMM and SPICE is that the effectiveness of an institution in a particular area of work is dependent on its capability to engage in high quality processes that are reproducible, that can be sustained, and that can be expanded. In particular, research (SECAT, 1998) has shown that the CMM model assists organizations in addressing the following issues:
1. Is the organization successful at learning from past mistakes?
2. Is it clear that the organization is spending limited resources effectively?
3. Does everyone agree which problems within the organization are the highest priorities?
4. Does the organization have a clear picture of how it will improve its processes?
The eMM quality framework, based on CMM and SPICE concepts, focuses on the process nature of online education. At the educational institution level, the emphasis of the eMM model is on guiding improvements in e-learning so that it moves from the realm of an ad-hoc process based on individual initiative to a mainstream, integrated,
fundamental process embraced by the institution as a valuable addition to the educational processes it already provides. A key to achieving this level of integration is demonstrating that e-learning delivers demonstrable improvements in areas like student learning. Along the continuum, the framework of the eMM defines the following levels of capability with respect to an institution's e-learning initiatives.
1. Level 1: Initial level with no formal process, where institutions are characterized by an ad-hoc approach to e-learning.
2. Level 2: Planned level with deliberate process, where institutions have adopted a more planned approach to e-learning.
3. Level 3: Defined level with structured and integrated process, where institutions have begun to integrate e-learning issues into university teaching and learning or strategic plans, often developing an e-learning vision.
4. Level 4: Managed level with an organizational approach, where institutions have developed useful criteria for evaluating e-learning in terms of improved student outcomes rather than just perceptions.
5. Level 5: Optimized level with continual improvement of educational effectiveness, where institutions have developed a program for regularly auditing the educational effectiveness of e-learning initiatives.
Sloan-C Quality Framework The purpose of the Sloan Consortium (Sloan-C) is to help learning organizations continually improve the quality, scale, and breadth of their offerings according to their own distinctive missions (Moore, 2005). To help establish specific standards for quality in online education, the Sloan-C emphasizes five areas of attention, which it calls the “five pillars” of quality in online education: 1) learning effectiveness, 2) cost effectiveness and
institutional commitment, 3) access, 4) faculty satisfaction, and 5) student satisfaction. These pillars encompass components that are part of an overall picture of the quality of education that students receive. The intent of the Sloan-C framework is to allow each organization to develop its own standards within each pillar of quality. For example, this means that each organization would determine their own indicators of student satisfaction that complement their mission, what and how to take measurements that inform each indicator, and the acceptable values for each measure. Then, the organization would institute processes that would provide systematic measurement, followed by review, which would spawn improvement projects. Sloan-C also encourages institutions to develop online learning programs that are at least as effective as other learning modalities, which implies the need for routine and systematic comparison of measures across formats.
Other Industry-Driven Frameworks
Several additional frameworks for assuring quality in education have been established based on industry-driven practices. Some institutions have successfully improved the overall quality of the service they provide by implementing the MBNQA education criteria for performance excellence framework (Dew & Nearing, 2004; Goldberg & Cole, 2002). The MBNQA framework consists of seven categories:
1. Leadership
2. Strategic planning
3. Customer focus
4. Measurement, analysis, and knowledge management
5. Workforce focus
6. Process management
7. Results
These categories are intended to help institutions improve student achievement, communication, productivity, and effectiveness as well as achieve strategic goals (NIST, 2009b). The criteria specified in this framework are based on the guidelines originally developed for assuring quality in business/nonprofit organizations (NIST, 2009a), which have been transcribed to fit the needs of educational settings. Although this framework does not specifically address online education, the general principles articulated within this framework can be applied to improve aspects of all types of learning environments. Another quality management system whose application originated in industry and has now been translated for use within education is the ISO 9000 family of standards which, along with ISO 14001:2004 (i.e., an environmental management system), has been implemented by more than one million organizations in 175 countries (ISO, 2009a). Many educational institutions in other countries, and more recently in the U.S., have received ISO 9001:2000 certification for their instructional quality system (ASQ, 2000). ASQ Z1.11-2002 is the guidance standard specific to education and training institutions, which helps organizations fulfill the requirements of ANSI/ ISO/ASQ Q9001-2001 (i.e., the U.S. equivalent of ISO 9001:2000). This standard has five clauses: 1) quality management system guidelines, 2) management responsibility, 3) resource management, 4) product (or service) realization, and 5) measurement, analysis, and improvement. There are several similarities between this quality management system and the MBNQA framework. For example, both emphasize leadership/management responsibility and measurement as well as analysis. Furthermore, like the MBNQA, online education is not specifically addressed in ASQ Z1.11-2002; however, these general principles also can be used to assure quality in a wide variety of educational settings.
STUDENT PERCEPTIONS OF COURSE QUALITY
Frameworks for quality assurance of e-learning are useful for establishing, implementing, and maintaining quality assessment processes in support of continuous improvement, accreditation, and benchmarking. With the frameworks structuring the effort, these quality processes are usually undertaken formally at an institutional or program level. In addition, individual faculty members may use informal processes for improving the quality of the courses. Such informal processes include instructors' content knowledge and personal effort including teaching skills and methods. The question remains as to whether or not formal and informal improvement initiatives based on frameworks (or otherwise) help create quality in e-learning from a student's perspective. Do e-learning quality frameworks, in fact, reflect the desires and needs of students? This question should be readily answered by locating the empirical and theoretical studies that formed the creation of each framework element, but such a quest turns out to be difficult rather than simple. The frameworks were built partly on empirical evidence, partly on theoretical postulating, and partly on the basis of experience and informal observation by dedicated expert educators (Inglis, 2008). In addition, the frameworks attempt to capture something extremely complex and multi-faceted, namely, the quality of the learning experience. Quality of traditional learning experiences is not so well defined that there exists a single prescription for the perfect course/teacher/learner/subject matter combination; and online learning experiences are no less complex. To assure quality and consumer satisfaction, institutions and their faculty must pay close attention not only to frameworks for assuring quality in e-learning, but also to their students' perceptions of and satisfaction with their online course offerings and programs (Young & Norgard, 2006). The following section examines some of the research that has addressed the quality of the e-learning experience from a student's perspective.

Factors Influencing Student Perceptions
Because most students evaluate the quality of the course based on their perceptions of the course, several factors that affect a student's perception of quality have been identified in the research. These factors include course design, strength of the online learning community, timely interaction between learners and instructors, realistic and achievable outcomes, adequate and easy instructions on how to meet the course outcomes, and fairness of exams and grading. Each of these factors will be considered in the following paragraphs.
Course Design Course design is a broad term that encompasses the characteristics of organization, accessibility, structure, and pedagogy. It also encompasses processes by which online communications and interaction are integrated into the class structure (Reisetter & Boris, 2004). Important aspects of course organization presented in the literature are organizing the course around goals; organizing for student-centeredness; organizing for flexibility in terms of pace, activities, and time commitment; organizing for timely feedback on assignments and assessments; unambiguous statements of expectations; and clear procedures (Reisetter & Boris, 2004). Students become frustrated in e-learning courses when they are poorly designed (Yang & Cornelius, 2004). This frustration can lead to poor learning outcomes. A well-designed course can improve students’ use of the different strategies and assignments within the virtual classroom. In addition, a study by Young and Norgard (2006) found that students preferred consistent design across courses to support ease of navigation. A study by Nath and Ralston-Berg (2008) found
that students place a high value on materials being well organized.
Online Learning Communities Traditional classrooms were once professor-centered, with the professor disseminating knowledge, and the students returning identical knowledge back to the professor through exams and assignments. More contemporary classrooms use active learning strategies to get students involved with changing, processing, and restructuring knowledge. Successful online learning environments are often described as learning communities in which the professor is a member of the community along with the students. An online learning community is where a group of learners, unified by a common cause and empowered by a supportive virtual environment, engage in collaborative learning within an atmosphere of trust and commitment (Ke & Hoadley, 2009). The role of the professor in the community is to select and filter information for the student to use, to provide thought-provoking questions and tasks, to facilitate thoughtful discussion, and to coach, counsel, encourage, and mentor student groups as they collaborate to learn, each student developing his or her own personal understanding of course material (Yang & Cornelious, 2005). Some literature labels an online community as “constructivist” in approach, where students actively construct personal meaning from interaction and activity (Navarro & Shoemaker, 2000; Reisetter & Boris, 2004). The participation of an instructor is key to developing a feeling of connectivity within an online learning community. The development of these communities in the online classroom has been associated with higher levels of student satisfaction and greater student learning (Arbaugh, 2002; Brodke & Mruk, 2009; Jung, Choi, Lim, & Leem, 2002; Kanuka & Anderson, 1998; Palloff & Pratt, 2007; Xiaojing, Magjuka, Bonk, & Seung-hee, 2007) and course retention
(Liu, Gomez, & Cherng-Jyh, 2009). Brodke and Mruk (2009) found that the cultivation of a highly interactive online learning community promoted student satisfaction and learning. Kanuka and Anderson (1998) noted that this social interaction between learners and instructor could contribute to learner satisfaction and frequency of interaction in the online course environment. Young and Norgard (2006) also determined that developing a strong online community with student-to-student interaction was important to student satisfaction.
Interaction Between Learners and Instructors
Studies that establish the importance of interactions in learning processes predate online learning. An early framework for studying interaction in distance education identified three types of critical interactions: learner-instructor, learner-learner, and learner-content. Studies that used the framework demonstrated learner-learner interactions to be primarily motivational, but also observed value from these interactions in terms of the development of social skills that are important for workplace performance. A role of interactions documented in some studies is in promoting higher order thinking and learning skills. One study linked student dissatisfaction with insufficient opportunities for learner-learner interaction (Navarro & Shoemaker, 2000). Perreault, Waldman, and Alexander (2002) provide evidence that some learners consider learner-learner and learner-instructor interactions to be critical to their success, describing their dissatisfaction as "a feeling of isolation" and "a lack of connectedness". Instructors teaching online courses need to use a greater range of communication technologies than those teaching face-to-face courses. One of the primary methods that students use to contact an instructor about issues not appropriate to a discussion board is through email. The online instructor needs to check email at least daily, if not more often, to be effective (Easton,
Students' perceptions of timely instructor responses to their questions have been found to be a significant predictor of learner satisfaction (Soon, Sook, Jung, & Im, 2000; Thurmond, Wambach, Connors, & Frey, 2002). Young and Norgard (2006) found that timely interaction between learners and instructors was an important component of developing a strong online community. Students in their study reported that when instructors did not respond in a timely manner, they felt isolated and unsure whether their efforts were on the right track. Yang and Cornelius (2004) found a similar result in their study.
Other Criteria of Importance to Student Perceptions of Quality
Using the Quality Matters™ rubric, Nath and Ralston-Berg (2008) conducted a study in which they asked online students what criteria define a quality course. They found that the top criterion for students evaluating the quality of a course was an easy-to-understand grading system. Other criteria they found to be of importance included 1) outcomes that were realistic and achievable, 2) instructions on how to meet the course outcomes that were adequate and easy to follow, 3) assignments that were appropriate for online learning, and 4) materials that were easily accessible.
FUTURE TRENDS: CULTURE FOR QUALITY Unlike traditional learning assessment, assessment of e-learning appears to remain in its infancy, theoretically and empirically. Due to the varying views of multiple stakeholders in the e-learning environment, it is critical to use relevant standards, accreditation processes, and best practices to guide the instructional design at institutional and program levels. At the course level, instructors should create a student-centered learning environment
that facilitates greater self-directed, autonomous learning behavior through peer interaction, learner engagement, and a supportive teaching presence. Although standards and accreditation might help enhance the quality of e-learning, higher education systems should not work like a factory. According to "Why teaching is 'not like making motorcars'" (Sutter, 2010), mass production and conformity actually prevent students from finding their passions and succeeding. While traditional manufacturing theory proposes that productivity and quality can be improved through standardization, the learning experience in a higher education system should be creative, heterogeneous, and unique. Therefore, instead of relying on past practices and premises, educators and administrators need to create new ways of developing and assessing learning outcomes in the e-learning environment. For example, instead of the largely assessment-based approach to quality used in higher education today, a system that focuses on "…change more than control, development rather than assurance, and innovation more than compliance" (Ehlers, 2009, p. 343) needs to be developed. A "change-development-innovation" type of approach would support building a culture of quality improvement within higher education. Given the widespread use of e-learning, this new approach also should focus more attention on ensuring the quality of online courses and programs. While we would like accrediting agencies to provide us with a step-by-step prescription for assessing quality in online education, the number of variables involved makes such a model virtually impossible. We can suggest, however, that the first step involves changing current quality monitoring practices to move away from "…a 'beast-like' presence requiring to be 'fed' with ritualistic practices by academics seeking to meet accountability requirements" (Newton, 2000, p. 153). Institutions and external quality bodies should focus on the critical elements that drive efforts to improve the quality of day-to-day
teaching and learning activities. Unfortunately, cultural change within any organization is a difficult process that requires specific long-term efforts focused on the achievement of strategic goals – one of which should be true quality improvement, not just documentation for the sake of accreditation (Ehlers, 2009).
REFERENCES
ASQ. (2000). Quality assurance standards guidelines for the application of ANSI/ISO/ASQ Q9001-2000 to education and training institutions. Milwaukee, WI: American Society for Quality.
Brown, B. J., & Mundrake, G. A. (2007). Proof of student achievement: Assessment for an evolving business education curriculum. 2007 NBEA Yearbook, 45, 130-145.
AAHE. (1994). CQI 101: A first reader for higher education. Washington, DC: American Association for Higher Education. About, I. H. E. P. (2010). Retrieved March 10, 2010, from http://www.ihep.org/ About/ aboutIHEP.cfm ADA. (1994). Retrieved December 22, 2009, from http://www.ada.gov/ adastd94.pdf ADEC. (2003). American Distance Education Consortium guiding principles for distance teaching and learning. Retrieved October 26, 2009, from http://www.adec.edu/ admin/ papers/ distance-teaching_principles.html Ahire, S. L., Landeros, R., & Golhar, D. Y. (1995). Total quality management: A review and an agenda for future research. Production and Operations Management, 4(3), 227–307.
Brodke, M. H., & Mruk, C. J. (2009). Crucial components of online teaching success: A review and illustrative case study. AURCO Journal, 15, 187–205.
Cleary, T. S. (2001). Indicators of quality. Planning for Higher Education, 29(3), 19–28. D’Andrea, V. (2007). Improving teaching and learning in higher education: Can learning theory add value to quality reviews? In Westerheijden, D. F., Stensaker, B., & Rosa, M. J. (Eds.), Quality assurance in higher education (pp. 209–223). The Netherlands: Springer. doi:10.1007/978-1-40206012-0_8 D’Andrea, V., & Gosling, D. (2005). Improving teaching and learning in higher education: A whole institution approach. Society for Research into Higher Education & Open University Press.
AQIP. (2008). Principles and categories for improving academic quality. Retrieved December 3, 2009, from http://www.aqip.org/
Dean, J. W., & Bowen, D. E. (1994). Management theory and total quality: Improving research and practice through theory development. Academy of Management Review, 19(3), 392–418. doi:10.2307/258933
AQIP. (2009). Principles and categories for improving academic quality. Retrieved December 3, 2009, from http://www.aqip.org/
Dew, J. R., & Nearing, M. M. (2004). Continuous quality improvement in higher education. Westport, CT: Praeger Publishers.
Arbaugh, J. B. (2002). Managing the online classroom: A study of technological and behavioral characteristics of Web-based MBA courses. The Journal of High Technology Management Research, 13, 203–223. doi:10.1016/S1047-8310(02)00049-4
Easton, S. S. (2003). Clarifying the instructor’s role in online distance learning. Communication Education, 52(2), 87. doi:10.1080/03634520302470 Eaton, J. S. (2003). Before you bash accreditation, consider the alternatives. The Chronicle of Higher Education, 49(25), B15.
Ehlers, U. (2009). Understanding quality culture. Quality Assurance in Education, 17(4), 343–363. doi:10.1108/09684880910992322 El Emam, K., Drouin, J.-N., & Melo, W. (1998). SPICE: The theory and practice of software process improvement and capability determination. California: IEEE Computer Society. Frank, T. (2004). Making the grade keeps getting harder. Retrieved October 29, 2009, from http:// www.csmonitor.com/ 2004/ 0113/ p11s01-legn. htm Goldberg, J. S., & Cole, B. R. (2002). Quality management in education: Building excellence and equity in student performance. Quality Management Journal, 9(4), 8–22. Goodson, C. E., Miertschin, S. L., Stewart, B., & Faulkenberry, L. (2009). Online distance education and student learning: Do they measure up? Paper presented at the Annual Conference of the American Society of Engineering Education, Austin, TX. Gosling, D., & D’Andrea, V. (2001). Quality development: A new concept for higher education. Quality in Higher Education, 7(1), 7–17. doi:10.1080/13538320120045049 Harvey, L. (1998). An assessment of past and current approaches to quality in higher education. Australian Journal of Education, 42(3), 237–238. Harvey, L. (2002). Evaluation for what? Teaching in Higher Education, 7(3), 245–263. doi:10.1080/13562510220144761 Hatfield, S. R., & Gorman, K. L. (2000). Assessment in education--the past, present, and future. Assessment in Business Education - 2000 NBEA Yearbook, 38, 1-10. Howell, S., & Baker, K. (2006). Good (best) practices for electronically offered degree and certificate programs: A 10-year retrospect. Distance Learning, 3(1), 41–47.
IHEP. (2000). Quality on the line: Benchmarks for success in Internet-based distance education. Washington, DC: Institute for Higher Education Policy. Inglis, A. (2008). Approaches to the validation of quality frameworks for e-learning. Quality Assurance in Education, 16(4), 347–362. doi:10.1108/09684880810906490 ISO. (2009a). ISO 9000 and ISO 14000. Retrieved December 17, 2009, from http://www.iso. org/ iso/ iso_catalogue/ management_standards/ iso_9000_iso_14000.htm ISO. (2009b). ISO 9000 esentials. Retrieved October 30, 2009, from http://www.iso.org/ iso/ iso_catalogue/ management_standards/ iso_9000_iso_14000/ iso_9000_essentials.htm Jung, I., Choi, S., Lim, C., & Leem, J. (2002). Effects of different types of interaction on learning achievement, satisfaction and participation in Web-based instruction. Innovations in Education and Teaching International, 39(2), 153–162. doi:10.1080/14703290252934603 Kanuka, H., & Anderson, T. (1998). Online social interchange, discord, and knowledge construction. Journal of Distance Education, 13(1), 57–74. Ke, F., & Hoadley, C. (2009). Evaluating online learning communities. Educational Technology Research and Development, 57(4), 487–510. doi:10.1007/s11423-009-9120-2 Leef, G. C. (2003). Accreditation is no guarantee of academic quality. The Chronicle of Higher Education, 49(30), B17. Leef, G. C., & Burris, R. D. (2003). Can college accreditation live up to its promises?Washington, DC: American Council of Trustees and Alumni. Little, B. B. (2009). Quality assurance for online nursing courses. The Journal of Nursing Education, 48(7), 381–387. doi:10.3928/0148483420090615-05
Liu, S. Y., Gomez, J., & Cherng-Jyh, Y. (2009). Community college online course retention and final grade: Predictability of social presence. Journal of Interactive Online Learning, 8(2), 165–182.
Palloff, R. M., & Pratt, K. (2007). Building online learning communities: Effective strategies for the virtual classroom (2nd ed.). San Francisco, CA: Jossey-Bass.
Marshall, S., & Mitchell, G. (2007). Benchmarking for quality improvement: The e-learning maturity model. Paper presented at the ascilite Singapore 2007.
Parker, N. K. (2008). The quality dilemma in online education revisited. In Anderson, T. (Ed.), The theory and practice of online learning (pp. 305–342). Edmonton, AB: AU Press, Athabasca University.
Montano, C. B., & Utter, G. H. (1999). Total quality management in higher education. Quality Progress, 32(8), 52–59. Moore, J. C. (2002). Elements of quality: The Sloan-C framework. Needham, MA: The Sloan Consortium. Moore, J. C. (2005). The Sloan Consortium quality framework and the five pillars. Needham, MA: The Sloan Consortium. Myer, K. A. (2002). Quality in distance education: Focus on online learning. ASHE-ERIC Higher Education Report, 29(4), 1–121. Nath, L., & Ralston-Berg, P. (2008). Why “quality matters” matters: What students value. Paper presented at the American Sociological Association 2008 Annual Conference. Navarro, P., & Shoemaker, J. (2000). Performance and perceptions of distance learners in cyberspace. American Journal of Distance Education, 14(2), 15–35. doi:10.1080/08923640009527052 Newton, J. (2000). Feeding the beast or improving quality? Academics’ perceptions of quality assurance and quality monitoring. Quality in Higher Education, 6(2), 153–163. doi:10.1080/713692740 NIST. (2009). Malcolm Baldrige National Quality Award: 2009-10 criteria for performance excellence. Gaithersburg, MD: National Institute of Standards and Technology of the United States Department of Commerce.
Paulk, M. C., Curtis, B., Chrissis, M. B., & Weber, C. V. (1993). Capability maturity model, version 1.1. IEEE Software, 10(4), 18–27. doi:10.1109/52.219617 Perreault, H., Waldman, L., & Alexander, M. (2002). Overcoming barriers to successful delivery of distance-learning courses. Journal of Education for Business, 77(6), 313. doi:10.1080/08832320209599681 Phipps, R. A., & Merisotis, J. P. (2000). Quality on the line: Benchmarks for success in internetbased distance education. Retrieved October 26, 2009, from http://www.ihep.org/ Publications/ publications-detail.cfm?id=69 Pollacia, L., & McCallister, T. (2009). Using Web 2.0 technologies to meet quality matters (QM) requirements. Journal of Information Systems Education, 20(2), 155–164. QM. (2006). Welcome to Quality Matters. Retrieved October 26, 2009, from http://www. qualitymatters.org/ QM. (2009). Quality Matters rubric standards 2008-2010 edition. Retrieved October 26, 2009, from http://qminstitute.org/ home/ Public%20Library/ About%20QM/ RubricStandards2008-2010.pdf Reisetter, M., & Boris, G. (2004). What works. Quarterly Review of Distance Education, 5(4), 277–291.
SACS. (2008). The principles of accreditation: Foundations for quality enhancement. Retrieved December 3, 2009, from http://www.sacscoc.org/ pdf/ 2008PrinciplesofAccreditation.pdf Schroeder, R. G., Linderman, K., Liedtke, C., & Choo, A. S. (2008). Six Sigma: Definition and underlying theory. Journal of Operations Management, 26(4), 536–554. doi:10.1016/j. jom.2007.06.007 SECAT. (1998). Why would you want to use a capability maturity model. Systems Engineering Capability Assessment & Training. Shah, R., & Ward, P. T. (2007). Defining and developing measures of lean production. Journal of Operations Management, 25(4), 785–805. doi:10.1016/j.jom.2007.01.019 Smith, G. G., Ferguson, D., & Caris, M. (2003). The Web versus the classroom: Instructor experiences in discussion-based and mathematics-based disciplines. Journal of Educational Computing Research, 29(1), 29–59. doi:10.2190/PEA0T6N4-PU8D-CFUF Soon, K. H., Sook, K. I., Jung, C. W., & Im, K. M. (2000). The effects of Internet-based distance learning in nursing. Computers in Nursing, 18(1), 19–25. Sutter, J. D. (Producer). (2010, March 17) Why teaching is not like making motorcars. CNN Opinion. Retrieved from http://www.cnn.com/ 2010/ OPINION/ 03/ 17/ ted.ken.robinson/ index.html Thurmond, V. A., Wambach, K., Connors, H. R., & Frey, B. B. (2002). Evaluation of student satisfaction: Determining the impact of a Web-based environment by controlling for student characteristics. American Journal of Distance Education, 16(3), 169–189. doi:10.1207/S15389286AJDE1603_4
USDOE. (2006). Evidence of quality in distance education program drawn from interviews with the accreditation community. Retrieved October 26, 2009, from http://www.ysu.edu/ accreditation/ Resources/ Accreditation-Evidence-of-Qualityin-DE-Programs.pdf Wang, Q. (2006). Quality assurance - best practices for assessing online programs. International Journal on E-Learning, 5(2), 265. WCET. (2001). Best practices for electronically offered degree and certificate programs. Retrieved October 26, 2009, from http://wcet.info/ resources/ accreditation/ Accrediting%20-%20 Best%20Practices.pdf Xiaojing, L., Magjuka, R. J., Bonk, C. J., & Seung-hee, L. (2007). Does sense of community matter? Quarterly Review of Distance Education, 8(1), 9–24. Yang, Y., & Cornelious, L. F. (2005). Preparing instructors for quality online instruction. Online Journal of Distance Learning Administration, 8(1). Yang, Y., & Cornelius, L. F. (2004). Students’ perceptions towards the quality of online education: A qualitative approach. Paper presented at the Association for Educational Communications and Technology 27th Conference. Young, A., & Norgard, C. (2006). Assessing the quality of online courses from the students’ perspective. The Internet and Higher Education, 9(2), 107–115. doi:10.1016/j.iheduc.2006.03.001
ADDITIONAL READING Allen, I. E., & Seaman, J. (2008). Staying the Course: Online Education in the United States, 2008. Needham, MA: Sloan Consortium.
Allen, M., Mabry, E., Mattrey, M., Bourhis, J., Titsworth, S., & Burrell, N. (2004). Evaluating the effectiveness of distance learning: a comparison using meta analysis. Journal of Communication. Birnbaum, R. (2000). Management Fads in Higher Education: Where They Come From, What They Do, and Why They Fail. San Francisco: Jossey-Bass. Bouge, E. G., & Hall, K. B. (2003). Quality and Accountability in Higher Education. Westport, CT: Praeger. Buzzetto-More, N. A., & Alade, A. J. (2006). Best practices in e-assessment. Journal of Information Technology Education, 5, 251–269. Chaney, B. H., Eddy, J. M., Dorman, S. M., Glessner, L., Green, B. L., & Lara-Alecio, R. (2007). Development of an Instrument to Assess Student Opinions of the Quality of Distance Education Courses. American Journal of Distance Education, 21(3), 145–164. doi:10.1080/08923640701341679 Dunn, L. C., M., O’Reilly, M., & Parry, S. (2004). The Student Assessment Handbook. London: RoutledgeFalmer. Ellis, R. A., Goodyear, P., Prosser, M., & O’Hara, A. (2006). How and what university students learn through online and face-to-face discussion: conceptions, intentions and approaches. Journal of Computer Assisted Learning, 22(4), 244–256. doi:10.1111/j.1365-2729.2006.00173.x Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students’ perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4(2), 215–235. doi:10.1111/j.1540-4609.2006.00114.x Gaytan, J., & McEwen, B. C. (2007). Effective online instructional and assessment strategies. American Journal of Distance Education, 21(3), 117–132. doi:10.1080/08923640701341653
Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies (No. ED04-CO-0040). Washington, D. C.: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development. Paechter, M., Maier, B., & Macher, D. (2010). Students’ expectations of, and experiences in elearning: Their relation to learning achievements and course satisfaction. Computers & Education, 54, 222–229. doi:10.1016/j.compedu.2009.08.005 Park, J.-H., & Hee Jun, C. (2009). Factors influencing adult learners’ decision to drop out or persist in online learning. [Article]. Journal of Educational Technology & Society, 12(4), 207–217. Pollock, J. E. (2007). Improving Student Learning One Teacher at a Time. Alexandria, VA: Association for Supervision and Curriculum Development. Prosser, M., & Trigwell, K. (1999). Understanding Learning and Teaching: The Experience in Higher Education. Philadelphia, PA: The Open Society for Research into Higher Education & Open University Press. Rocco, S. (2007). Online assessment and evaluation. New Directions for Adult and Continuing Education, (113): 75–86. doi:10.1002/ace.249 Rosen, A. (2009). E-learning 2.0: Proven Practices and Emerging Technologies to Achieve Results. New York: American Management Association. Shea, M., Murray, R., & Harlin, R. (2005). Drowning in Data? How to Collect, Organize and Document Student Performance. Portsmouth, NH: Heinemann. Stephenson, J. (2001). Teaching and Learning Online: Pedagogies for New Technologies. London: Kogan Page.
KEY TERMS AND DEFINITIONS Assessment: Activities used to determine the progress of students in meeting learning objectives. Continuous Quality Improvement (CQI): Improving the design and administration of academic programs or courses by focusing on continuous improvement.
Quality: A subjective concept that is open to interpretation by the many stakeholders involved in higher education. Quality Matters™: An evaluation program that encompasses a set of standards to measure the quality of instruction and course design through a faculty driven, peer review process. Courses are assessed on 40 specific elements, which are distributed across eight broad standards.
Chapter 11
Measuring Success in a Synchronous Virtual Classroom Florence Martin University of North Carolina Wilmington, USA Michele A. Parker University of North Carolina Wilmington, USA Abdou Ndoye University of North Carolina Wilmington, USA
ABSTRACT This chapter will benefit those who teach individuals using the synchronous virtual classroom (SVC). The SVC model will help instructors design online courses that incorporate the factors that students need to be successful. This model will also help virtual classroom instructors and managers develop a systematic way of identifying and addressing the external and internal factors that might impact the success of their instruction. The strategies for empirically researching the SVC, which range from qualitative inquiry to experimental design, are discussed along with practical examples. This information will benefit instructors, researchers, non-profit and profit organizations, and academia.
INTRODUCTION
In the past decade, technology has significantly enhanced education, and online courses have grown in popularity and credibility. In 2008, the Sloan Consortium reported that 3.9 million students in the U.S. (over 20%) were taking at least one online course. In just one year, from 2006 to 2007, there was a 12.9% increase in online enrollment (Allen & Seaman, 2008), an increase of roughly 400,000
students. The reason for this growth is that online courses offer "anytime," "anywhere" learning, which provides flexibility and convenience for students and instructors. However, one of the major challenges that distance educators still face in designing effective online courses is including interactivity (Muirhead, 2004; Keefe, 2003). One of the ways this challenge has been addressed is through the use of synchronous virtual classroom technology.
DOI: 10.4018/978-1-60960-615-2.ch011
Synchronous Virtual Classrooms
Synchronous virtual classrooms are online environments that enable students and instructors to communicate synchronously using text chat, audio, and video. They enable faculty and students to interact as if they were face-to-face in a classroom by permitting instructors and students to share presentations on an interactive whiteboard, express emotions through emoticons, participate in group activities in breakout rooms, etc. Synchronous virtual classrooms are software applications that bring human interaction into the virtual classroom through facial expressions, vocal intonations, hand gesticulation, and real-time discussion (Wimba, 2009a). There are a variety of synchronous virtual classrooms (Adobe Connect, Saba Centra, Elluminate Live, Horizon Wimba, Dim Dim, Learn Linc, Microsoft Live Meeting, Webex, Wiziq, etc.). They are also referred to as synchronous learning systems or collaborative electronic meeting rooms (Table 1).

Table 1. Synchronous virtual classroom products
Adobe Connect: http://www.adobe.com/products/acrobatconnectpro/
Saba Centra: http://www.saba.com/products/centra/
Elluminate Live: www.elluminate.com
Horizon Wimba: www.horizonwimba.com
Dim Dim: http://www.dimdim.com/
LearnLinc: http://www.ilinc.com/products/suite/learnlinc
Microsoft Live Meeting: http://office.microsoft.com/en-us/livemeeting/
Webex: www.webx.com
Wiziq: http://www.wiziq.com/

Virtual classroom features can be grouped into three categories based on their application: (1) discussion and interaction, facilitated by breakout rooms, emoticons, chats, videos, presentations, polls, quizzes, and surveys; (2) instruction and reinforcement, implemented through the electronic whiteboard, application sharing, and the content area; and (3) classroom management tools, which include the ability to upload and store documents, an auto-populated participant list, usage details, and archive options. The software can be integrated into course management systems such as Blackboard. Additionally, it accommodates diverse learners (e.g., it is accessible to the hearing and visually impaired) and types of learning (e.g., auditory, visual, tactile). There is also a telephone number for participants to dial in, which increases its reach and functionality (Wimba, 2009b).
Advantages of Synchronous Virtual Classroom Technologies
Researchers have found that one of the major challenges in online education is including interactivity (Muirhead, 2004; Keefe, 2003). This need for interaction has led to guidelines for designing online courses. Adding synchronous components to online courses can enhance meaningful interactions (Repman, Zinskie & Carlson, 2005). With the introduction of virtual classroom technologies to the market, synchronous delivery in online courses, initially possible only with video-conferencing technologies, has become cost-effective. Student-student and student-instructor interactions both provide the learner with guidance and support. Students feel the need to be part of a learning community where they feel involved and have a social presence. They are motivated by receiving immediate feedback from the instructor as well as their peers. Collis (1996) lists motivation, telepresence, good feedback, and pacing as the four major advantages of using synchronous systems. Park and Bonk (2007) list the following as the major benefits of using a virtual classroom: providing immediate feedback, encouraging the exchange of multiple perspectives, enhancing dynamic interactions
among participants, strengthening social presence, fostering the exchange of emotional support, and supplying verbal elements.
MEASURING SUCCESS IN ONLINE EDUCATION
A number of factors contribute to the success of online courses. Phipps and Merisotis (2000) provided a comprehensive list of benchmarks for measuring success in Internet-based online courses. The benchmarks include institutional support, course development, teaching/learning success, course structure, student support, faculty support, evaluation, and assessment. Meanwhile, Lockee, Moore, and Burton (2002) presented evaluation strategies for measuring success in distance education, which consist of program inputs, performance outcomes, attitude outcomes, programmatic outcomes, and implementation concerns. Implementation of these factors varies depending on whether an online course is taught asynchronously or synchronously. Consistent with the benchmarks and strategies above, the success of online education involves many aspects, some related to student learning and achievement, others related to the learning environment and delivery method. All of these factors contribute to the ultimate outcome, which is student learning and achievement. Consequently, any assessment of distance learning needs to take into account these different aspects and how they interact to produce the ultimate outcome. While scholars (Rovai, 2000; Moallem, 2009a) argue that assessment principles for online education should not differ from those for face-to-face education, there is common agreement that implementation and focus may differ. Factors such as the delivery vehicle and learner technological skill, interaction, engagement, and identity might be key in an online course assessment but less so in a traditional face-to-face course.
While any education program will be impacted by its environment (Moallem, 2009a), a differentiating characteristic of online education is that it involves a number of logistical, organizational, and infrastructural factors that must work together to determine much of its success. For example, while poor technical support or a lack of communication between technical support and instructional staff may impact traditional course delivery, these same factors will have a far more damaging impact on online education. Furthermore, the flexibility to make adjustments and corrections in the middle of instruction and the visual cues available to instructors of face-to-face courses are not as readily available to online instructors, who consequently need to carefully assess each phase of an online education program or course (from design to development, etc.) up front in order to ensure success. The Quality Matters initiative is a good example of this characteristic of online education (QualityMatters, 2010). These factors make measuring the success of online education more challenging and complex than measuring that of traditional education programs. Regardless of the model or evaluation strategies used, measuring the success of online education should be based on an ongoing, integral, and continuous approach (Moallem, 2009b) in order to address the challenges raised by the lack of face-to-face interaction. It will also have to take into account learner characteristics and attitudes towards the delivery system, the learning environment, and ongoing communication between instructors and students through immediate feedback. In a study of assessment practices for online courses, Kim, Smith, and Maeng (2008) reported that immediate feedback, self-assessment, and team and peer assessment are some of the main features that could be easily integrated as unique features of online education assessment. While these features can be available in a traditional course, they might be easier to implement in an online course or program.
Commenting on the importance and effectiveness of immediate feedback in online education, versus the more likely delayed feedback in traditional courses, Kim, Smith, and Maeng (2008) wrote, "Compared to the traditional instruction environment, the online learning environment made this central role of feedback achievable in terms of time and access to information" (p. 18). Researchers have investigated online education success prediction models based on learner attitudes, characteristics, and background (Hall, 2008; Roblyer & Davis). Understanding these factors will help make our efforts to measure the success of online education more valuable and integrated, so that online education can be better tailored to learners' needs. In other words, an online education effectiveness model needs to be based on a systemic approach that integrates multiple dimensions and centers on both students and instructors.
MEASURING SUCCESS IN SYNCHRONOUS VIRTUAL CLASSROOM In this section, we examine measures of success in synchronous online education using the dimensions (human, design, and non-human) provided by the editors, Eom and Arbaugh. In the next section we propose a conceptual model for an effective synchronous online class and strategies for researching the SVC.
Human Dimension
Human interaction is key to success in any classroom, both face-to-face and online. In an asynchronous online setting, the human element might be missing or at times minimal, whereas in a synchronous online setting, such as the virtual classroom, the human element is significant and adds to the success of the class. Bandi-Rao, Holmes and Davis (2008) discuss ways to maintain the human element in an online college writing
class. Rodriguez and Nash (2004) conclude that the area where technology and humans intersect proves to be most critical to the success and quality of adult degree programs. Garrison, Anderson and Archer (2000) introduced a community of inquiry model comprising three elements essential to an educational transaction: cognitive presence, social presence, and teaching presence. They listed defining and initiating discussion topics, sharing personal meaning, and focusing discussion as indicators for measuring teaching presence in asynchronous text-based communication. The human elements that contribute to success in a synchronous virtual classroom can be categorized into three groups: instructor, students, and technology support.
Instructor
• Subject matter knowledge
• Setting clear goals and objectives
• Engagement and facilitation skills
• Readiness and preparedness to use technology
Subject matter expertise, setting clear goals and objectives for the class, engaging the students, and readiness and preparedness to use technology are some of the characteristics seen in effective instructors. While these are necessary skills for any instructor, the instructor's engagement, facilitation skills, and preparedness for the delivery method play a major role in an online class. With the shift to online education, the instructor's role has become more that of a facilitator than of a traditional lecturer. The instructor in the virtual classroom setting has more opportunities to lecture and interact with students than an instructor in an asynchronous online setting. However, the level of interaction differs from that in a traditional face-to-face class because the students are separated by distance. Even if video conferencing with web cameras is used, it is still difficult to perceive the feelings
of the students during the entire synchronous online session. Thus, the instructor's engagement and facilitation skills are very important in a synchronous classroom. Instructors have to be proficient in using the synchronous virtual technology. Their preparedness in using the technology plays an important role in the success of the class. Instructors who are proficient with the synchronous technology are able to overcome minor technological glitches that they might encounter during an online session. However, those who are not proficient with technology may be nervous about using synchronous technologies, or, if they do try, may be discouraged if their first attempts are unsuccessful. Instructors also need to model for their students how to communicate efficiently and effectively using synchronous communication tools.
Students
• Motivation
• Readiness and preparedness for the delivery method
Just as an instructor has to make adjustments and be ready to teach in a synchronous setting, the students also have to be prepared for the changing demands related to online learning with respect to technology, pedagogical practices, and social roles. Vonderwell and Savery (2004) examine and discuss student roles and responsibilities for learning online and strategies to promote student readiness. Students usually take an online class because of the “anytime” and “anywhere” convenience it offers. They tend to sign up for the asynchronous courses where they can work at their own pace. However, learners may not feel obligated or pressured to participate in online communications when they do not see each other (Palloff & Pratt, 1999). Students have to be motivated to enroll in an online or blended course that requires synchronous meetings. While adult learners may
respond to external motivators such as increased pay and rewards, internal motivators such as job satisfaction and self-esteem are more important in providing them with a reason to learn (Fidishun, 2010). If these motivators can be integrated within technology-based instruction, adults will respond more positively. Enrolling in an online synchronous course requires the learner to take an active role and participate in the synchronous sessions. Students who participate in the synchronous virtual setting also have to be prepared to use the synchronous technology. They have to become proficient in the use of the virtual classroom to participate fully in the synchronous virtual class. An instructor can encourage student readiness by designing experiences in which the student will encounter a need for the knowledge or skill presented (Fidishun, 2010). Parasuraman (2000) proposed the Technology Readiness Index (TRI), which measures the propensity to embrace and use new technologies for accomplishing goals in home life and at work. The TRI identifies four dimensions of technology beliefs that impact an individual's level of techno-readiness: two dimensions are contributors (optimism and innovativeness), and two are inhibitors (discomfort and insecurity). A study conducted by EDUCAUSE found that students rarely attribute technology-related learning problems to their own limitations but rather to limitations on the part of the professor (ECAR, 2007).
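To make the four TRI dimensions named above concrete, the following sketch computes dimension means for one hypothetical respondent and a simple contributors-minus-inhibitors summary. The items, ratings, and scoring rule are illustrative assumptions, not Parasuraman's (2000) published scoring procedure.

```python
# Illustrative summary of the four TRI dimensions; ratings and scoring are invented.
from statistics import mean

# Hypothetical 1-5 ratings for one student, grouped by dimension
ratings = {
    "optimism":       [4, 5, 4],   # contributor
    "innovativeness": [3, 4, 4],   # contributor
    "discomfort":     [2, 3, 2],   # inhibitor
    "insecurity":     [3, 2, 2],   # inhibitor
}

dimension_means = {dim: mean(scores) for dim, scores in ratings.items()}
contributors = mean([dimension_means["optimism"], dimension_means["innovativeness"]])
inhibitors = mean([dimension_means["discomfort"], dimension_means["insecurity"]])

print(dimension_means)
# Higher values suggest greater techno-readiness under this illustrative rule
print(f"Contributors minus inhibitors: {contributors - inhibitors:.2f}")
```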
Technology Support
• Training
• Available troubleshooting support
Most synchronous online classrooms have technology support on hand to troubleshoot. Technology support is also available for students and instructors before they decide to participate in the synchronous class. Training is available to students ahead of time to help them be confident when they participate
on their own. Instructors are given an opportunity to attend training sessions to prepare themselves for the synchronous virtual setting. Some instructors may also have technology support available if they teach their synchronous classes online from an on-campus computer. Though most students log in to the synchronous virtual classroom from the comfort of their own homes or from a place other than the university, they may have access to technology support through phone or chat. Instructors are also encouraged to have technical support teams present at tutorial sessions. Technology support should also be available to answer emails or respond to telephone inquiries. Before the virtual sessions, the instructor should know who is responsible for providing technical support and share this information with students (Berge, 1995).
Interaction
• Instructor-to-student interaction
• Student-to-student interaction
• Interaction with content
Interaction is a vital component of online learning, and Northrup (2002) summarizes interaction as engagement in learning. Moore (1989) devised three different types of interaction: learner-content interaction, learner-instructor interaction, and learner-learner interaction. Moore describes learner-content interaction as the defining characteristic of education, because it is the process of intellectually interacting with content that changes the learner's understanding, perspectives, and cognitive structures. Learner-instructor interaction is highly desirable, as the instructor seeks to stimulate or at least maintain the student's interest in what is to be taught and to motivate the student to learn. Learner-learner interaction, a challenging aspect of distance education, is nevertheless an extremely valuable resource for learning; it can include one-on-one interaction and group interaction with or without the presence of the instructor. Hillman, Willis and Gunawardena (1994) introduced a fourth type of interaction, interaction with technologies. They present the concept of learner-interface interaction and recommend instructional design strategies that will facilitate students' acquisition of the skills needed to participate effectively in the electronic classroom (Figure 1). Fullford and Zhang (2003) found that perceived level of interaction and satisfaction appears to decline with increased exposure to interactive instructional television. In the virtual classroom, students can interact with each other, with instructors, and with online resources. Both instructors and students can act as facilitators and provide support, feedback, and guidance during live interaction (Khan, 2000).
Figure 1. Interaction within synchronous virtual classrooms
Design Dimension The design elements that contribute to the success of the synchronous virtual classroom are instructional design and instructional strategies.
Instructional Design
• Course structure (aligned objectives, content, and assessment)
• Systematic instructional design models
• Pre-planned instruction
Instructional Design is a system of developing well-structured instructional materials using
objectives, related teaching strategies, systematic feedback, and evaluation (Moore & Kearsley, 1996). It can also be defined as the science of creating detailed specifications for the design, development, evaluation, and maintenance of instructional material that facilitates learning and performance. The design process is important for creating well-aligned and effective instruction, which in turn helps ensure an effective synchronous virtual session. Instructional material that is aligned with the course goals and objectives, with practice activities and room for immediate feedback, has to be designed ahead of time. Unlike in a face-to-face class, where spontaneous instruction can be delivered, in a virtual setting pre-planned instruction turns out to be more effective and successful. Assessment items that are aligned with the objectives have to be designed, which helps measure student learning. There are numerous instructional design models that can be used for the design process (e.g., the Dick and Carey Model, the Morrison, Ross and Kemp Model, the ASSURE Model, the Rapid Prototyping Model, etc.).
Instructional Strategies
• Interactive PowerPoint presentations to stimulate meaningful discussion
• Collaborative activities through breakout rooms
• Web search activities through external Web links
• Desktop sharing for student presentations
• Interaction through text and audio chat, polling, and emoticons
Instructional strategies play an important role in the synchronous virtual classroom. Researchers have identified instructional strategies that work in the classroom. Marzano, Pickering and Pollock (2001), in their book "Classroom Instruction that Works," list eight different instructional strategies. Pitt (1996) identified ten instructional strategies for online learning (e.g., discussion, case study,
collaborative learning, etc.). Instructional strategies for online learning have to be thought through ahead of time, as it is cumbersome to decide on activities spontaneously. Interactive PowerPoint presentations that can guide meaningful class discussion have to be designed and loaded into the virtual classroom. Web links to be shared with the students have to be readily available; students can further explore these websites and participate in activities pertaining to them. Periodic desktop sharing gives students an opportunity to demonstrate their own work. Breakout rooms make small-group activities possible in the virtual classroom. Interaction can also be enhanced through private and public text and audio chat, frequent polling, and the use of emoticons.
Technology Dimension The technology elements that contribute to the success of the synchronous virtual class are technology access and features.
Technology Access
• Availability
• Uninterrupted access
• Easy setup and ease of use
Technology access is an important requirement for the student and instructor to participate effectively in a synchronous virtual setting, especially in higher education. Unless the system is already available to them through the university, free of cost, instructors are unlikely to attempt to use it. Currently there are a few online synchronous systems, such as Wiziq and DimDim, that instructors can use free of cost; however, they have limited functionality compared to the proprietary products. Students and instructors choose to use virtual classroom technology once they realize that it is a stable platform that lets the class function without being disconnected multiple times, and this is made
possible with the advanced bandwidth available these days. A system with easy setup and no complex installation is also important. Finally, it has to be a user-friendly system that any instructor or student can learn without extensive training.
Features
• Content frame
• Eboard
• Breakout rooms
• Text/audio chat
• Polling feature
• Emoticons
• Application sharing
• Archive feature
The features available in the synchronous virtual classroom also play an important role in the success of the class. Most virtual classroom technologies have a content frame to share the instructor's PowerPoint slides, an eboard where the instructor can write, breakout rooms for group activities, text chat through which the instructor and students can interact, and audio chat to talk via a microphone or telephone with the instructor and other students. There are also features through which students can share their feelings using emoticons. Instructors can administer student polls, share their desktop, or have students share their desktops through application sharing. Websites can be displayed for students, and with stable Internet bandwidth, webcams can be used so that students and instructors can see each other. The entire virtual classroom session can be archived, and students can view the archive later. In the latest versions, students can download the archived class sessions as an MP3 or MP4 file. Students with audio difficulties also have backup telephone numbers to dial in. Instructors have the ability to restrict or grant student access to the eboard, the webcam, and the microphone.
SYNCHRONOUS VIRTUAL CLASSROOM (SVC) SYSTEMIC MODEL
This proposed model is based on systems theory. Each of the three dimensions of the model is depicted as a subsystem, and these subsystems (human, technology, and design) are interconnected. For each subsystem there are specific inputs that contribute to the learning process within the virtual classroom. The outputs (student engagement, student products, and student satisfaction) are the tangible products resulting from course activities. They serve as evidence of progress toward the learning outcomes, which are the changes in knowledge, skills, and attitudes occurring as a result of taking a course, on any topic or in any discipline, in a virtual classroom setting. Student engagement in the virtual classroom can be examined in many different ways. One way is viewing chat transcripts or listening to the archive to see how interactive the student has been, both in the main room and in the breakout rooms during collaborative activities. It can also be measured by participation, such as answering or asking questions while employing the various virtual classroom features (e.g., raising a hand or responding with emoticons). Student engagement can also be assessed from student responses to the polls that instructors might use periodically in the classroom to maintain interactivity, and from student presentations shared through the desktop sharing functionality. Student products are direct evidence of the learning that happens in the virtual class session. These could include student artifacts such as presentations (e.g., individual and collaborative) or other group work products (e.g., writing papers, designing rubrics, developing an argument, analyzing a case). Student satisfaction is the natural consequence of student engagement and student products.
Figure 2. Synchronous virtual classroom systemic model
This, too, can be measured in different ways. Instructors can periodically poll students to measure their satisfaction by asking targeted questions on course content, course progress, and so on. A web link to a brief survey can be pushed out through the virtual classroom to receive immediate feedback on student satisfaction. Students can also respond to questions orally or via the text chat feature. The challenge here is that most of this functionality does not allow the user to maintain anonymity. The outputs (student engagement, student products, and student satisfaction) mentioned above constitute evidence that the instructor might use to evaluate or assess student progress towards final course outcomes. For example, students' presentations can give an instructor feedback on how close students are to learning a new concept. These outcomes can take different forms: a body of knowledge (e.g., a theoretical concept), a skill (e.g., the ability to use a development tool), or an attitude (e.g., a reaction to technology).
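The outputs described in this section are visible in artifacts a virtual classroom already produces (chat transcripts, poll responses, archives). The sketch below is a purely hypothetical illustration of tallying two of them, engagement and satisfaction, from an exported event log; the log format, event names, and weights are assumptions made for illustration, not features of any particular virtual classroom product.

```python
# Hypothetical tally of engagement and satisfaction from an exported session log
from collections import defaultdict

# Invented event log: (student, event_type, value)
events = [
    ("Ana",   "chat",              "I think the second example is clearer"),
    ("Ben",   "question",          "Could you repeat the last step?"),
    ("Ana",   "emoticon",          "raise_hand"),
    ("Chris", "chat",              "Agreed with Ana"),
    ("Ben",   "satisfaction_poll", 4),   # 1-5 rating
    ("Ana",   "satisfaction_poll", 5),
    ("Chris", "satisfaction_poll", 4),
]

# Illustrative weights for engagement indicators (not validated measures)
weights = {"chat": 1.0, "question": 2.0, "emoticon": 0.5}

engagement = defaultdict(float)
ratings = []
for student, kind, value in events:
    if kind == "satisfaction_poll":
        ratings.append(value)
    else:
        engagement[student] += weights.get(kind, 0.0)

for student, score in sorted(engagement.items(), key=lambda pair: -pair[1]):
    print(f"{student}: engagement score {score}")
print(f"Mean satisfaction rating: {sum(ratings) / len(ratings):.2f}")
```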
STRATEGIES FOR EMPIRICALLY RESEARCHING THE SVC Scholars use a progression of research methods to build on previous knowledge in order to advance the field (Stanovich & Stanovich, 2003). The progression from descriptive techniques to
those that allow stronger causal inferences enables researchers, policymakers, designers, and adopters to be certain of the relative value of the innovation (Bernard et al., 2004). Following this logic, the SVC can be examined using a variety of research methods such as case studies, survey research, and correlational studies. Collectively, this information can be used to design a comparative study that tests the SVC model. "It is only under these circumstances that we can push forward our understanding of the features of distance education and classroom instruction that make them similar or different" (Smith & Dillon, 1999, as cited in Bernard et al., 2004, p. 382). Qualitative research (e.g., case studies, narratives, etc.) can be useful in the early stages of an investigation because it lays the groundwork and helps researchers identify directions for future studies (Gersten, 2001). Qualitative studies can help us understand the phenomena of interest, in this case the SVC model. This may involve interviewing (individuals or groups), conducting observations within the virtual classroom, reviewing archived sessions, or administering open-ended surveys to instructors and students who use this medium. In a recent study, McBrien, Jones, and Cheng (2009) used an open-ended survey to collect qualitative data from 80 students on their experiences in the virtual classroom. This yielded six themes related to dialogue, structure, convenience, technical issues, pedagogical preferences, and learner autonomy.
Meanwhile, Kirkpatrick (2010) uses a case study to assess the pedagogical value of the chat feature in a virtual class. Similarly, Hennessy, Deaney, Ruthven, and Winterbottom (2007) use a case study to understand how pedagogy is developing in response to interactive whiteboards. These authors demonstrate how qualitative investigations can yield thick description that helps us conceptualize variables by drawing attention to unrealized aspects or sharpening our understanding of participants' perspectives (Stanovich & Stanovich, 2003) as they relate to different dimensions of the model. The dimensions of the SVC model can also be examined using quantitative methods that draw information from larger samples. This corresponds with Stanovich and Stanovich's (2003) recommendation to shift from qualitative methods of inquiry to other designs in order to establish a convergence of evidence. Quantitative methods include the use of attitude scales and surveys. For example, Liaw, Huang, and Chen (2007) used a survey they developed on learner attitudes toward e-learning. In comparison, some researchers combine existing scales in one questionnaire for their studies (Tung & Deng, 2007; Rovai & Wighting, 2005), or they use an existing scale and add their own questions (Cameron, Morgan, Williams, & Kostelecky, 2009). Notwithstanding, any instrument, whether an existing scale, a modified one, or a newly developed one, should be valid and reliable. Reliability coefficients should be at least .70 to be considered acceptable (Nunnally, 1978). While there are different forms of validity, construct validity is by far the most important (Gay, Mills, & Airasian, 2006). For instruments where validity has not been established, exploratory factor analysis can be conducted to validate constructs; in subsequent uses of the instrument, confirmatory factor analysis can be used. In each instance, internal-consistency reliability analysis can be used to determine the reliability of the scales. Scales that correspond with the attributes in the SVC model will need to be identified, revised, or created.
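To make the reliability criterion above concrete, the following is a minimal sketch of an internal-consistency check for a hypothetical five-item SVC satisfaction scale. The item responses are invented for illustration, and the function simply implements the standard Cronbach's alpha formula; it is not taken from any of the studies cited here.

```python
# Internal-consistency check for a hypothetical SVC satisfaction scale
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert-type scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 8 students x 5 items, rated on a 1-5 scale
responses = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 3, 2],
    [4, 4, 5, 4, 4],
    [3, 4, 3, 3, 4],
    [5, 4, 4, 5, 5],
    [2, 2, 3, 2, 3],
])

alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.2f}")  # values of .70 or above are conventionally acceptable
```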
Correlational designs can be used to understand the degree to which variables in the SVC model are related and their statistical significance. Basic correlational designs involve bivariate correlations or linear regression. Correlational designs become increasingly complex with additional variables. Advanced correlational techniques such as multiple regression, path analysis, and structural equation modeling allow the researcher to examine multiple variables simultaneously and allow for the partial control of other variables. An example of a correlational study is Liaw, Huang, and Chen's (2005) research on instructor and learner attitudes toward e-learning in China. They use correlations and stepwise multiple regression. In terms of specific variables, they use perceived self-efficacy, usefulness, enjoyment, and behavioral intention to use e-learning to predict instructors' attitudes. Another multiple regression is performed using other variables to predict learner attitudes. Scholars have used a variety of correlational techniques such as canonical correlations and ordinary least squares regression analysis to examine technology and learners (Rovai & Wighting, 2005; DuFrene, Lehman, Kellermanns, & Pearson, 2009). Path analysis and structural equation modeling may be especially helpful in testing the SVC model because of its many facets (human, technology, and design dimensions). Path analysis allows the researcher to examine variables that can be measured directly, while structural equation modeling also incorporates latent variables (those that cannot be measured directly). In both instances, diagrams are used to depict the model, and software such as AMOS or LISREL can be used to test variables in the model (Loehlin, 2004). Although Raaij and Schepers (2008) use PLS (modeling software designed for small samples), they expanded the Technology Acceptance Model (TAM2) to explain individual student differences in their level of acceptance and use of virtual learning environments. The constructs in their model were computer anxiety, personal innovativeness, perceived ease of use, perceived usefulness, subjective norms, and intensity of use.
Among the results, they found no direct effect for ease of use, which corresponded with previous findings. Typically, correlational studies are high in external validity but low in internal validity. In contrast, experimental designs are high in internal validity but low in external validity because of the difficulty of replicating experimental conditions in real life, particularly in educational settings. However, because correlational designs cannot establish causality, experimental designs are often used at later stages of theory testing. Experimental designs include true experiments, causal-comparative designs, and quasi-experimental investigations. Each design has its advantages and disadvantages. Causal-comparative designs lack a control and experimental group, whereas quasi-experimental designs lack random assignment. Both of these limitations hinder the ability to establish a causal relation definitively (Stanovich & Stanovich, 2003). In each of these experimental designs, the effect of the independent variable (the cause) on the dependent variable (the outcome) is examined. In order to establish an effect, the independent variables are isolated through manipulation and everything else is held constant. This, coupled with randomization, wherein participants are assigned to a control or treatment group, allows researchers to compare the results from each group to rule out alternative explanations. In doing so, researchers can confirm or disconfirm a causal theory (Stanovich & Stanovich, 2003). Lee and Kang (2005) provide an example of an experiment involving three groups (two instructional groups and a control group). In their study they examine the effectiveness of Intranet-based instruction on perceived usefulness and achievement. In comparison, Tung and Deng (2007) use a causal-comparative design to understand how students of different genders react to dynamic and static emoticons. In both instances, statistical analysis involved analysis of variance (ANOVA).
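As a hedged illustration of the experimental analyses just described, the sketch below runs a one-way ANOVA on invented satisfaction scores for a control group and two hypothetical SVC instructional conditions. It is a generic example of the technique, not a re-analysis of Lee and Kang (2005) or Tung and Deng (2007).

```python
# One-way ANOVA across a control group and two hypothetical SVC conditions
from scipy import stats

control      = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3]   # invented satisfaction ratings
svc_lecture  = [3.6, 3.9, 3.5, 4.0, 3.7, 3.8]
svc_breakout = [4.1, 4.3, 3.9, 4.4, 4.2, 4.0]

f_stat, p_value = stats.f_oneway(control, svc_lecture, svc_breakout)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates that at least one group mean differs; post hoc
# comparisons would be needed to locate which conditions differ.
```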
Similar studies can be conducted with various attributes in the SVC model. Smith and Dillon (1999) argue that comparative studies are only useful when “there is full analysis of media attributes and their hypothesized effects on learning, and when these same attributes are present and clearly articulated in the comparison conditions” (as cited in Bernard et al., 2004, p. 382). They suggest that researchers use this level of analysis and provide a clear account of the similarities and differences between treatment and control groups. Stanovich and Stanovich (2003) aptly acknowledge that a well-designed experiment may test one or two theories appropriately but may be poorly designed to test rival theories; subsequent research may then be necessary to eliminate some explanations in support of others. In summary, a case study may generate a hypothesis that can be investigated in further study. To build on the case study, a researcher may use a correlational design to verify the link between the variables with a larger sample. Experimental designs can then be employed to confirm or disconfirm causal relationships between variables. With this comparative information, researchers can eliminate alternative theories and explanations (Stanovich & Stanovich, 2003; Smith & Dillon, 1999). Qualitative research, survey research, correlational studies, and experimental designs all have their strengths and weaknesses, but this does not overshadow their utility in educational research. Replication of any design is important; therefore, research methodology and findings must be presented in a way that allows others to conduct the same study and obtain the same results. A body of converging evidence for the SVC model will strengthen the conclusions that can be drawn (Stanovich & Stanovich, 2003), and understanding the nature and extent of the impact of the SVC on important outcomes will give credibility to this instructional method (Bernard et al., 2004). Concomitantly, in order to advance research on online and blended learning, it is recommended that we review and incorporate information from
other disciplines. For example, in the last decade a vast array of research on online learning has been conducted in business education. By integrating literature across our respective disciplines, we can benefit from shared methodological and analytical approaches, theoretical and conceptual frameworks that explain phenomena, and additional evidence to guide administrators and technicians when they make decisions regarding the design, emphasis, and implementation of technology-based tools such as the SVC (Arbaugh et al., 2009).
CONCLUSION This chapter will benefit those who teach using the synchronous virtual classroom. The SVC model provided will help instructors design online courses that include the factors needed for students to be successful. The model will also help virtual classroom instructors and managers develop a systematic way of identifying and addressing the external and internal factors that might impact the success of their instruction. The strategies for testing this model empirically will benefit instructors, researchers, non-profit and for-profit organizations, and academia.
REFERENCES Allen, I., & Seaman, J. (2008). Staying the course. Online education in the United States, 2008. Needham, MA: The Sloan Consortium. Arbaugh, J. B., Godfrey, M. R., Johnson, M., Leisen Pollack, B., Niendorf, B., & Wresch, W. (2009). Research in online and blended learning in the business disciplines: Key findings and possible future directions. The Internet and Higher Education, 12(2), 71–87. doi:10.1016/j. iheduc.2009.06.006
Bandi-Rao, S., Radtke, J., Holmes, A., & Davis, P. (2008). Keeping the human element at the center college-level writing online: Methods and materials. In C. Bonk, et al. (Eds.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2008 (p. 25). Chesapeake, VA: AACE Berge, Z. L. (1995). Facilitating computer conferencing: Recommendations from the field. Educational Technology, 35(1), 22–30. Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., & Wozney, L., W… Huang, B. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379–439. doi:10.3102/00346543074003379 Cameron, B. A., Morgan, K., Williams, K. C., & Kostelecky, K. L. (2009). Group projects: Student perceptions of the relationship between social tasks and a sense of community in online group work. American Journal of Distance Education, 23, 20–33. doi:10.1080/08923640802664466 Collis, B. (1996). Tele-learning in a digital world: The future of distance learning. London, UK: International Thompson Computer Press. DuFrene, D. D., Lehman, C. M., Kellermanns, F. W., & Pearson, R. A. (2009). Do business communication technology tools meet learner needs? Business Communication Technology Quarterly, 72(2), 146–162. doi:10.1177/1080569909334012 ECAR. (2007). The ECAR study of undergraduate students and information technology, (study 6). EDUCAUSE Center for Applied Research. Fidishun, D. (2010). Andragogy and technology: Integrating adult learning theory as we teach with technology. Retrieved on March 25th, 2010 from http://frank.mtsu.edu/ ~itconf/ proceed00/ fidishun.htm
Fulford, C. P., & Zhang, S. (1993). Perceptions of interaction: The critical predictor in distance education. American Journal of Distance Education, 7(3), 8–21. doi:10.1080/08923649309526830
Khan, B. H. (2000). Discussion of resources and attributes of the Web for the creation of meaningful learning environments. CyberPsychology & Behavior, 3(1), 17–23. doi:10.1089/109493100316193
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2, 87–105. doi:10.1016/S1096-7516(00)00016-6
Kim, N., Smith, M. J., & Maeng, K. (2008). Assessment in online distance education: A comparison of three online programs at a university. Online Journal of Distance Learning Administration, 11(1). Retrieved from http://www.westga.edu/ ~distance/ ojdla/ spring111/ kim111.html.
Gay, L. R., Mills, G. E., & Airasian, P. (2006). Educational research: Competencies for analysis and applications (8th ed.). Upper Saddle River, NJ: Pearson Prentice Hall. Gersten, R. (2001). Sorting out the roles of research in the improvement of practice. Learning Disabilities Research & Practice, 16(1), 45–50. doi:10.1111/0938-8982.00005 Hall, M. (2008). Predicting student performance in Web-based distance education courses based on survey instruments measuring personality traits and technical skills. Online Journal of Distance Learning Administration, 11(4). Retrieved from http://www.westga.edu/~distance/ojdla/fall113/hall113.html Hennessy, S., Deaney, R., Ruthven, K., & Winterbottom, M. (2007). Pedagogical strategies for using the interactive whiteboard to foster learner participation in school science. Learning, Media and Technology, 32(3), 283–301. doi:10.1080/17439880701511131 Hillman, D. C. A., Willis, D. J., & Gunawardena, C. N. (1994). Learner-interface interaction in distance education: An extension of contemporary models and strategies for practitioners. American Journal of Distance Education, 8(2), 30–42. doi:10.1080/08923649409526853 Keefe, T. J. (2003). Using technology to enhance a course: The importance of interaction. EDUCAUSE Quarterly, 1, 24–34.
Kirkpatrick, G. (2010). Online chat facilities as pedagogic tools: A case study. Active Learning in Higher Education, 6(2), 145–159. doi:10.1177/1469787405054239 Lee, D., & Kang, S. (2005). Perceived usefulness and outcomes of intranet-bases learning (IBL): Developing asynchronous knowledge systems in organizational settings. Journal of Instructional Psychology, 32(1), 68–73. Liaw, S., Huang, H., & Chen, G. (2007). Surveying instructor and learner attitudes toward elearning. Computers & Education, 49, 1066–1080. doi:10.1016/j.compedu.2006.01.001 Lockee, B., Moore, M., & Burton, J. (2002). Measuring success: Evaluation strategies for distance education. EDUCAUSE Quarterly, 25(1), 20–26. Retrieved from http://www.educause.edu/ ir/ library/ pdf/ eqm0213.pdf. Loehlin, J. C. (2004). Latent variables: An introduction to factor, path, and structural equation modeling. Mahwah, NJ: Lawrence Erlbaum Associates. Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: Association for Supervision and Curriculum Development.
McBrien, J. L., Jones, P., & Cheng, R. (2009). Virtual spaces: Employing a synchronous online classroom to facilitate student engagement in online learning. International Review of Research in Open and Distance Learning, 10(3), 1–17. Moallem, M. (2009a). Assessment of complex learning outcomes in online learning environments. In Rogers, P., Berg, G. A., Boettecher, J. V., Howard, C., & Justice, L. (Eds.), Encyclopedia of distance learning (2nd ed.). Hershey, PA: IGI Global. Moallem, M. (2009b). The efficacy of current assessment tools and techniques for assessment of complex and performance-based learning outcomes in online learning. In Rogers, P., Berg, G. A., Boettecher, J. V., Howard, C., & Justice, L. (Eds.), Encyclopedia of distance learning (2nd ed.). Hershey, PA: IGI Global. Moore, M. G. (1989). Editorial: Three types of interaction. American Journal of Distance Education, 3(2), 1–7. doi:10.1080/08923648909526659 Muirhead, B. (2004). Encouraging interactivity in online classes. International Journal of Instructional Technology and Distance Learning, 2(11). Retrieved from http://itdl.org/ Journal/ Jun_04/ article07.htm.
Park, Y. J., & Bonk, C. J. (2007). Is life a Breeze?: A case study for promoting synchronous learning in a blended graduate course. Journal of Online Learning and Teaching, 3(3), 307–323. Phipps, R., & Merisotis, J. (2000). Quality on the line: Benchmarks for success in Internet-based distance education. Institute for Higher Education Policy. Retrieved from http://www.eric.ed.gov/ ERICDocs/ data/ ericdocs2sql/ content_storage_01/ 0000019b/ 80/ 16/ 67/ ba.pdf Pitt, T. J. (1996). The multi-user object oriented environment: A guide with instructional strategies for use in adult education. Unpublished manuscript. QualityMatters. (2010). Quality Matters: Interinstitutional quality assurance in online learning. Retrieved from http://www.qualitymatters.org/ Raaij, E. M., & Schepers, J. J. L. (2008). The acceptance and use of a virtual learning environment in China. Computers & Education, 50, 838–852. doi:10.1016/j.compedu.2006.09.001 Repman, J., Zinskie, C., & Carlson, R. (2005). Effective use of CMC tools in interactive online learning. Computers in the Schools, 22(1/2), 57–69. doi:10.1300/J025v22n01_06
Nunnally, J. (1978). Psychometric theory. New York, NY: McGraw-Hill.
Roblyer, M. D., & Davis, L. (2008). Predicting success for virtual school students: Putting research-based models into practice. Online Journal of Distance Learning Administration, 11(4). Retrieved from http://www.westga.edu/~distance/ojdla/winter114/roblyer114.html
Palloff, R. M., & Pratt, K. (1999). Building learning communities in cyberspace: Effective strategies for the online classroom. San Francisco, CA: Jossey-Bass Publishers.
Rodriguez, F. G., & Nash, S. S. (2004). Technology and the adult degree program: The human element. New Directions for Adult and Continuing Education, 103, 73–79. doi:10.1002/ace.150
Parasuraman, A. (2000). Technology readiness index (TRI): A multiple-item scale to measure readiness to embrace new technologies. Journal of Service Research, 2(4), 307–320. doi:10.1177/109467050024001
Rovai, A. P. (2000). Online and traditional assessments: What’s the difference? The Internet and Higher Education, 3, 141–151. doi:10.1016/ S1096-7516(01)00028-8
Northrup, P. T. (2002). Online learners’ preferences for interaction. The Quarterly Review of Distance Education, 3(2), 219–226.
Rovai, A. P., & Wighting, M. (2005). Feelings of alienation and community among higher education students in a virtual classroom. The Internet and Higher Education, 8, 97–110. doi:10.1016/j. iheduc.2005.03.001 Smith, P. L., & Dillon, C. L. (1999). Comparing distance learning and classroom learning: Conceptual considerations. American Journal of Distance Education, 13, 107–124. doi:10.1080/08923649909527020 Stanovich, P. J., & Stanovich, K. E. (2003). Using research and reason in education: How teachers can use scientifically based research to make curricular & instructional decisions. Portsmouth, NH: RMC Research Corporation. Tung, F., & Deng, Y. (2007). Increasing social presence of social actors in e-learning environments: Effects of dynamic and static emoticons on children. Displays, 28, 174–180. doi:10.1016/j. displa.2007.06.005 Vonderwell, S., & Savery, J. (2004). Online learning: Student roles and readiness. The Turkish Online Journal of Educational Technology, 3(3), 38–42. Wimba. (2009a). Wimba for higher education. Retrieved from http://www.wimba.com/ solutions/ higher-education/ wimba_classroom_for_higher_education Wimba. (2009b). Bring class to life. Retrieved from http://www.wimba.com/products/wimba_ classroom
ADDITIONAL READING Alavi, M., Wheeler, B. C., & Valacich, J. S. (1995). Using IT to re-engineer business education: An exploratory investigation of collaborative telelearning. Management Information Systems Quarterly, 19(3), 293–312. doi:10.2307/249597
Allan, B. (2007). Time to Learn?: E-learners’ experiences of time in virtual learning communities. Management Learning, 38(5), 557–573.. doi:10.1177/1350507607083207 Allen, I., & Seaman, J. (2008). Staying the Course. Online Education in the United States. Needham, MA: The Sloan Consortium. Sloan-C. Anyanwu, C. (2003). Myth and Realities of New Media Technology: Virtual Classroom Education Premise. Television & New Media, 4(4), 389–409. doi:10.1177/1527476403256210 Arbaugh, J. B. (2000a). Virtual classroom versus physical classroom: An exploratory study of class discussion patterns and student learning in an asynchronous online MBA course. Journal of Management Education, 24(2), 213–233.. doi:10.1177/105256290002400206 Arbaugh, J. B. (2000b). Virtual classroom characteristics and student satisfaction with online MBA courses. Journal of Management Education, 24(1), 32–54..doi:10.1177/105256290002400104 Ardichvili, A. (2008). Learning and knowledge sharing in virtual communities of practice: Motivators, barriers, and enablers. Advances in Developing Human Resources, 10(4), 541–554.. doi:10.1177/1523422308319536 Bailey, E. K., & Cotlar, M. (1994). Teaching via the internet. Communication Education, 43(2), 184–193. doi:10.1080/03634529409378975 Bandi-Rao, S., Radtke, J., Holmes, A., & Davis, P. (2008). Keeping the Human Element at the Center College-Level Writing Online: Methods and Materials. In C. Bonk et al. (Eds.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2008 (p. 25). Chesapeake, VA: AACE.
Barnes, F. B., & Perziosi, R. C & Gooden, D. J. (2004). An examination of the learning styles of online MBA students and their preferred course delivery methods. New Horizons in Adult Education, 18(2), 19-30. Retrieved from http://education. fiu.edu/ newhorizons/ journals/ volume18no2Spring2004.pdf Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, A., Tamim, R. M., Surkes, M. A., & Bethel, E. C. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 79(3), 1243–1289. doi:10.3102/0034654309333844 Bielman, V., Putney, L., & Strudler, N. (2000). Constructing community in a postsecondary virtual classroom. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA. Boardman, C., & Sundquist, E. (2009). Toward understanding work motivation: Worker attitudes and the perception of effective public service. American Review of Public Administration, 39(5), 519–535. doi:10.1177/0275074008324567 Bonk, C. J. (2009). The World is Open: How Web Technology is Revolutionizing Education. San Francisco, CA: Jossey-Bass/Wiley. Bonk, C. J., Wisher, R. A., & Lee, J. Y. (2003). Moderating Learner-Centered E-Learning: Problems and Solutions, Benefits and Implications. In Roberts, T. S. (Ed.), Online Collaborative Learning: Theory and Practice, 54-45. Hershey, Pa.: Idea Group Publishing. Bonk, C. J., & Zhang, K. (2008). Empowering online learning: 100+ activities for reading, reflecting, displaying, and doing. San Francisco, CA: Jossey-Bass.
Caldwell, E. R. (2006). A comparative study of three instructional modalities in a computer programming course: Traditional instruction, Web-based instruction, and online instruction. PhD dissertation, University of North Carolina at Greensboro. Campbell, M., Gibson, W., Hall, A., Richards, D., & Callery, P. (2008). Online vs. face-to-face discussion in a Web-based research methods course for postgraduate nursing students: A quasi-experimental study. International Journal of Nursing Studies, 45(5), 750–759. doi:10.1016/j. ijnurstu.2006.12.011 Clark, D, N., & Gibb, J. L. (2006). Virtual team learning: An introductory study team exercise. Journal of Management Education, 30(6), 765787. doi:10.1177/1052562906287969 Coleman, S. (2009). Why do students learn online? Retrieved from http://www.worldwidelearn.com/ education-articles/ benefits-of-online-learning. htm Cook, D. A., & McDonald, F. S. (2008). Elearning, is there anything special about the “e”? Perspectives in Biology and Medicine, 51(1), 5–21. doi:10.1353/pbm.2008.0007 Dineen, B. R. (2005). Teamxchange: A team project experience involving virtual teams and fluid team membership. Journal of Management Education, 29(4), 593–616..doi:10.1177/1052562905276275 Dirckinck-Holmfield, L., Sorensen, E. K., Ryberg, T., & Buus, L. (2004). A Theoretical Framework for Designing Online Master Communities of Practice. In Banks, S. (Eds.), Networked Learning, 267–73. Lancaster: University of Lancaster. Dumont, R. A. (1996). Teaching and learning in cyberspace. IEEE Transactions on Professional Communication, 39(4), 192–204. doi:10.1109/47.544575 Frisolli, G. (2008). Adult Learning. Retrieved from http://adultlearnandtech.com/ historyal.htm
Gilmore, S., & Warren, S. (2007). Themed article: Emotion online: experiences of teaching in a virtual learning environment. Human Relations, 60(4), 581–608..doi:10.1177/0018726707078351 Greenhow, C., Robelia, B., & Hughes, J. E. (2009). Learning, teaching, and scholarship in a digital age: Web 2.0 and classroom research: What path should we take now? Educational Researcher, 38(4), 246–259..doi:10.3102/0013189X09336671 Grosjean, G., & Sork, T. J. (2007). Going online: Uploading learning to the virtual classroom. New Directions for Adult and Continuing Education, 113, 13–24. doi:10.1002/ace.243 Halper, S., Kelly, K., & Chuang, W. H. (2007). A reflection on Coursestream System: A virtual classroom streaming system designed for large classes. TechTrends, 51(2), 24–27. doi:10.1007/ s11528-007-0022-z Hiltz, S. R., Johnson, K. D., & Turoff, M. (1986). Experiments in group decision making: Communication process and outcome in faceto-face versus computerized conferences. Human Communication Research, 13(2), 225–252. doi:10.1111/j.1468-2958.1986.tb00104.x Hysong, S. J., & Mannix, L. (2003, April). Learning outcomes in distance education versus traditional and mixed environments. Paper presented at the annual meeting of the Society for Industrial and Organizational Psychology, Orlando, Florida. Keefe, T. J. (2003). Using technology to enhance a course: The importance of interaction. EDUCAUSE Quarterly, 1, 24–34. Khan, B. H. (2000). Discussion of resources and attributes of the web for the creation of meaningful learning environments. CyberPyschology & Behavior, 3(1), 17–23. doi:10.1089/109493100316193 Knowles, M. (1984). Andragogy in action: Applying modern principles of adult education. San Francisco, CA: Jossey Bass.
Kraiger, K., & Jerden, E. (2007). A meta-analytic investigation of learner control: Old findings and new directions. In Fiore, S. M., & Salas, E. (Eds.), Toward a science of distributed learning (pp. 65– 90). Washington, DC:APA. doi:10.1037/11582-004 Machtmes, K., & Asher, J. W. (2000). A meta-analysis of the effectiveness of telecourses in distance education. American Journal of Distance Education, 14(1), 27–46. doi:10.1080/08923640009527043 McKinnie, R. (2008). Best practices for delivering virtual classroom training. White Paper, Adobe Systems Incorporated. McNamara, J. M., Swalm, R. L., Stearne, D. J., & Covassin, T. M. (2008). Online weight training. Journal of Strength and Conditioning Research, 22(4), 1164–1168. doi:10.1519/ JSC.0b013e31816eb4e0 Means, B., Toyoma, Y., Murphy, R., Bakia, M., & Jones, J. (2009). Evidence of evaluation based practices in online learning: A meta analysis and review of online learning studies. Retrieved from http://www.ed.gov/ rschstat/ eval/ tech/ evidencebased-practices/ finalreport.pdf Moore, M. G., & Kearsley, G. (1996). Distance Education: a Systems View. Belmont: Ca. Wadsworth Publishing Company. Northrup, P. T. (2002). Online learners’preferences for interaction. The Quarterly Review of Distance Education, 3(2), 219–226. Parker, M. A. & Martin, F. (2009). Using virtual classrooms: Student perceptions of features and characteristics in an online and a blended course. Journal of Online teaching and Learning, 6(1), 135-147. Poirier, C. R., & Feldman, R. S. (2004). Teaching in cyberspace: Online versus traditional instruction using a waiting-list experimental design. Teaching of Psychology, 31(1), 59–62. doi:10.1207/ s15328023top3101_11
Salas, E., Kosarzycki, M. P., Burke, C. S., Fiore, S. M., & Stone, D. L. (2002). Emerging themes in distance learning research and practice: Some food for thought. International Journal of Management Reviews, 4(2), 135–153. doi:10.1111/14682370.00081 Schwen, T. M., & Hara, N. (2004). Community of practice: A metaphor for online design. In S. A. Barab, R. Kling, and J. H. Gray, Designing for virtual communities in the service of learning (Eds.), 154–78. Cambridge, U.K.: Cambridge University Press. Scoville, S. A., & Buskirk, T. D. (2007). Traditional and virtual microscopy compared experimentally in a classroom setting. Clinical Anatomy (New York, N.Y.), 20(5), 565–570. doi:10.1002/ca.20440 Slavin, R. E. (2007). Educational research in an age of accountability. Boston, MA: Pearson Education, Inc. Webster, J., & Hackley, P. (1997). Teaching effectiveness in technology-mediated distance learning. Academy of Management Journal, 40(6), 1282–1309. doi:10.2307/257034
KEY TERMS AND DEFINITIONS Horizon Wimba: Horizon Wimba Virtual Classroom is a live, virtual classroom with audio,
video, application sharing, and content display. MP4 capabilities provide the option for students to download the Wimba archives in either MP3 or MP4 audio file format. Instructional Design: Instructional Design is a system of developing well-structured instructional materials using objectives, related teaching strategies, systematic feedback, and evaluation. Interaction: Interaction can be defined as engagement in learning. Four different types of interaction are learner-content, learner-instructor, learner-learner, and learner-system interaction. Online Learning: Learning delivered by Web-based or Internet-based technologies. Synchronous Technologies: Synchronous technologies are online environments that enable students and instructors to communicate synchronously using text chat, audio, and video. Virtual Classroom: Virtual classrooms are online environments that enable students and instructors to communicate synchronously using text chat, audio, and video. They enable faculty and students to interact as if they were face-to-face in a classroom by permitting instructors and students to share presentations on an interactive whiteboard, express emotions through emoticons, participate in group activities in breakout rooms, etc.
Chapter 12
Factors Influencing User Satisfaction with InternetBased E-Learning in Corporate South Africa Craig Cadenhead University of Cape Town, South Africa Jean-Paul Van Belle University of Cape Town, South Africa
ABSTRACT This chapter looks at the factors that influence user satisfaction with Internet-based learning in the South African corporate environment. An electronic survey was administered, and one hundred and twenty valid responses from corporations across South Africa were received. Of the thirteen factors examined, only the following were found to exert a statistically significant influence on learner satisfaction: instructor response towards the learners, instructor attitude toward Internet-based learning, the flexibility of the course, perceived usefulness, perceived ease of use, and the social interaction experienced by the learner in assessments. Interestingly, four of these factors were also identified as significant in a similar Taiwanese study, which provides a degree of cross-cultural validation for the findings, even though this sample was different and smaller. Perhaps surprisingly, none of the six demographic variables exerted a significant influence. It is hoped that organisations and educational institutions can note and make use of the important factors when conceptualizing and designing their e-learning courses. DOI: 10.4018/978-1-60960-615-2.ch012
INTRODUCTION Background E-learning can provide a learning experience delivered through the Internet and multimedia presentations (Lau, 2000). This article focuses on electronic learning (e-learning) as it is delivered through the Internet, a phenomenon that is growing fast (Wang, Wang & Shee, 2007). ICT-based training is core to the development planning of most African governments (Elearning Africa, 2006), and South Africa is placing importance on e-learning in the public, private and governmental sectors. However, South Africa does experience certain limitations when it comes to accessing Internet-based learning. Technological issues such as bandwidth and access, coupled with limited customisation of learning management systems and content for the South African market, contribute to these limitations (Van Der Spuy & Wöcke, 2003). With competitive markets and a strong focus on return on investment, the technology-based e-learning function in the corporate environment is evolving to become part of an organisation’s focus on improving service and possible cost advantages. The mid-1990s saw a new phase of technology-based e-learning resulting from the increased popularity of the Internet and other Web-related technologies (Li, Lau, Shih & Li, 2008). Computers have become more powerful, and the Internet has become a strong foundation for technology-based e-learning in organisations (Hoffman, 2002). The learner and the organisation gain from factors such as standardised delivery of the learning as well as the ability and convenience for learners to dictate the pace of their learning (Strother, 2002). Corporate strategies and characteristics are constantly being changed by technology, including the provision of further adult education for employees (O’Sullivan, 2000). E-learning, and specifically Internet-based learning, has become a business asset that can enhance the learning capability of
an organisation and can result in economic savings and increased efficiency for the organisation (Strother, 2002). South African companies not only design and accredit their learning material according to national, legal and industrial standards but also adopt international generic standards (Dagada & Jakovljevic, 2004). Technology has altered organisational training and teaching so that these are no longer limited to the classroom and traditional teaching methods (Marold, Larsen & Moreno, 2000). An Internet-based e-learning strategy within an organisation allows employees to access learning content directly, irrespective of time or location (Brown, 2001). An e-learning system allows employees to schedule their own courses and examinations, and provides an environment in which learners can communicate and collaborate with other learners and/or the learning practitioners (Fry, 2001; Stewart & Waight, 2005). The success of corporate learning can be assessed by measuring the satisfaction of participants using Internet-based e-learning within their organisations; such measurement is critical in evaluating an organisation’s Internet-based e-learning strategy (Wang, 2003).
Research Objective The growth of the Internet and of e-learning initiatives shows the need for an improved understanding of the factors that influence the satisfaction of learners using Internet-based learning in South African corporations. Consequently, the objective of this research is to document the factors that influence user satisfaction with Internet-based learning in the South African corporate environment. The research instrument is based largely on that used in a study conducted in Taiwan (Sun, Tsai, Finger & Chen, 2008). The instrument has been modified to accommodate a South African corporate implementation, so that it is more relevant to and understandable by an audience with different
demographics. The results are compared to the findings of Sun et al. (2008). The paper briefly describes what Internet-based learning is, how it differs from other forms of learning (such as more conventional forms of e-learning or more traditional classroom methods), and how it is used in the South African corporate environment. The next section investigates six e-learning dimensions that have been used to predict learner satisfaction: learner, instructor, course, technology, design and environment. The six dimensions yield 13 variables, hypothesized to be the independent variables that explain perceived e-learner satisfaction. The research methodology is explained, followed by an explanation of how the data was analysed. Next, the testing of the hypotheses is discussed and the results are compared to those of the study conducted by Sun et al. (2008). Lastly, the article summarizes the findings and highlights any particularly significant results that could lead to further exploration of this topic.
LITERATURE REVIEW Overview of E-Learning Technology-based e-learning comprises using the Internet and other relevant technologies to produce learning material, regulate courses and teach learners in an organisation. “Internet-based e-learning” is constantly evolving and advancing to become “the use of computer network technology, primarily over an Intranet or through the Internet, to deliver information and instruction to individuals” (Welsh, Wanberg, Brown & Simmering, 2003, p. 246). Corporate e-learning benefits such as convenience and re-use can result in cost reduction and savings for the organisation (Fry, 2001; DeRouin, Fritzsche & Salas, 2004). Utilising e-learning can provide an organisation with some advantages over traditional classroom teaching methods. Firstly, e-learning
supplies the ability to repeat the learning process without having to coordinate and organise lecturers and venues. E-learning can also be archived, to be recalled when and where it is needed depending on the requirements of the users. Internet-based learning is cost effective both for the learners and for those providing the learning, as it does not carry the same overheads as more traditional classroom-based learning environments (Zhang et al., 2004). Finally, unified and up-to-date learning content can be produced and delivered within short time frames (Macpherson, Eliot, Harris & Homan, 2004; Rabak & Cleveland-Innes, 2006). Numerous studies and models in Internet e-learning research address user satisfaction and the factors that influence learners’ perceived satisfaction with e-learning. An excellent and recent overview of the research in this area was made by Arbaugh, Godfrey, Johnson, Pollack, Niendorf & Wresch (2009). This study examines the factors that influence learners’ perceived satisfaction with e-learning in the corporate environment. The model of dimensions and antecedents of perceived e-learner satisfaction judged most appropriate for this study is that of Sun et al. (2008). This model was chosen because it is recent and the dimensions it examines are considered relevant to this study. The model is made up of six dimensions containing 13 hypotheses (Sun et al., 2008). The model will now be discussed under the headings of the six dimensions: Learner, Instructor, Course, Technology, Design and Environment.
Learner Dimension Internet-based learning enables the learner to perform tasks such as electronic work submission (including assessment) and collaboration through online discussion groups and virtual classrooms, and enables the learner’s progress to be monitored through online learning management software (Forster, Dawson & Reid, 2005; Nixon & Helms, 1997).
Gaining insight into learners’ motivation and attitude towards using technology can determine the levels of e-learning utilisation within an organisation (Ong & Lai, 2006). The corporate learner’s attitude towards and understanding of computers can be essential in determining whether the learner’s participation is negative or positive (Fishbein & Ajzen, 1975; Muilenburg & Berge, 2005). An understanding of the makeup and culture of an organisation’s workforce can be an important element of the learner dimension (Arbaugh et al., 2009). Life experiences within the organisation are shaped by social and historical events that can determine the group’s attitudes and views within the workplace, including employees’ attitude to e-learning (Westerman & Yamamura, 2007). Adding to this is the diversity of the corporate environment in South Africa and the cultural factors that can affect the way in which employees with different backgrounds approach Internet e-learning. Factors such as adapting the learning process and material so that it is flexible enough to cater for each employee’s preferred learning style and communication preference are vital (Ardichvili, Maurer, Li, Wentling & Stuedemann, 2006). Perceived learner satisfaction can therefore be influenced by demographic variables such as gender, age, job role and organisational context. An individual’s negative feelings or phobia towards computers can result in an employee being unwilling to embrace anything delivered online or via a computer (Yushau, 2006). Insufficient experience on Internet-based platforms can leave the learner confused and frustrated, creating anxiety around the learning material and environment (Piccoli, Ahmad & Ives, 2001). Internet-based learning can thus create a level of computer anxiety for the learner and have a negative influence on perceived corporate e-learning satisfaction. The final aspect of the learner dimension to be examined is the Internet self-efficacy of the learner, which can be translated into the learner’s belief
that they have the ability to undertake Internet-based tasks, built through experience with elements such as site navigation and search engines. The goals set by an individual can be attributed to the learner’s self-belief and their sense that they are capable of reaching the targets they have set for themselves (Hodges, 2004). The Internet has become an extension of education, and people are increasingly adept at using the Internet to meet their needs (Engelbrecht, 2003). The level of participation by the adult learner is influenced by their computer self-efficacy (Pituch & Lee, 2006), and technologically proficient learners appear to embrace Internet-based courses earlier than those who are not (Piccoli et al., 2001). Additionally, computer skills, English language proficiency and prior Internet-based learning experience can all influence e-learner satisfaction.
Instructor Dimension The relationship between a learning practitioner and a student is important, as responding to and providing feedback to the learner is a key element of the learning process (Kenworthy & Wong, 2005). The ability of the practitioner to respond and grow to satisfy the student’s needs is important to the learner’s experience (O’Neil, Singh & O’Donoghue, 2004). The inability or non-response of a lecturer can cause the learner additional frustration (Meier, 2007). Instructors can track and report on learners over the Internet and integrate this into existing systems within the organisation (Hammer & Champy, 2001). Instructor response timeliness towards the learner can thus be hypothesized to have a positive influence on perceived corporate e-learning satisfaction. Corporate learning practitioners need to be able to adapt themselves to a technology-based learning environment such as e-learning. The practitioner needs to actively embrace the medium of e-learning; effort is required, as s/he needs to become familiar with the
technology-based learning environment and be able to design and create electronic courseware. The instructor needs the ability, inclination and skills to adapt to the learning environment and to create support materials (Hamid, 2002). The ability to teach and knowledge of the subject matter are thus no longer the only key skills required. The instructor’s attitude toward Internet-based e-learning can ultimately affect the learner’s experience (Falowo, 2007). The instructor needs to have a positive attitude towards Internet-based learning, especially in the South African corporate environment, where instructors are faced with further challenges due to the trans-cultural knowledge barriers and cultural diversity inherent to the population of South Africa (Meier, 2007).
Course Dimension The growing importance of the Internet within organisations has led the technology-based e-learning function to move away from the more traditional classroom and the more conventional methods of e-learning, such as video and disc offerings (Welsh et al., 2003). Traditional classroom learning can result in logistical problems for the student and lecturer (such as accessibility of courseware and issues surrounding venues). Utilising e-learning reduces the time spent repeating learning, gives the learning process flexibility in accommodating the preferences of both the practitioner and the learner, and saves costs by reducing the overheads required for more traditional learning. E-learning via the Internet allows participants to repeat the process, combine software and learning modules from different sources, search for different offerings and material, and manage the course themselves (Roffe, 2002). The time from updating learning materials to delivering the learning content can be kept short (Macpherson et al., 2004; Rabak & Cleveland-Innes, 2006). Internet-based learning satisfies the learners’ need for flexibility with
regard to time and instruction venue/environment (Pituch & Lee, 2006). Learners benefit from the improved quality of courses, which are enhanced not only by technology such as multimedia software, the Internet and video-on-demand, but also by the ability to set up virtual classrooms that extend the functionality to provide platforms for collaborating with the course content, the practitioners and the other students (Roffe, 2002). Learners appreciate the ability to receive learning of a high quality (Yee, Luan, Ahmad & Mahmud, 2009). Thus, course quality has a bearing on learner satisfaction.
Technology Dimension E-learning is enhanced by, and has been developed extensively through, advancements in technology (Falowo, 2007). The technological design and offering can often influence how a function is received and embraced within the organisation. Technology facilitates access to learning, giving greater control to the learner (DeRouin, Fritzsche & Salas, 2005). The performance of hardware, such as the learner’s or instructor’s machine, or the availability of networks or bandwidth, can directly affect the learner’s response to technology-based e-learning (Fry, 2001; Liu & Wang, 2009). Technology adoption is increasing as organisations overcome challenges such as limited bandwidth, quality and the associated costs. These factors are particularly pertinent in a South African context. Problems for learners can include limited access to computer facilities and Internet downtime (Cronje, 2008). An additional issue in the technology dimension is the cost associated with better-quality technology and Internet access (Van Der Spuy & Wöcke, 2003).
Design Dimension The Technology Acceptance Model (TAM) looks at two beliefs that are key to system adoption: perceived ease of use and perceived usefulness (Davis, Bagozzi & Warshaw, 1989).

Figure 1. The technology acceptance model (Source: Davis et al., 1989)

Internet-based e-learning, for those working in a corporate environment, creates the opportunity for learners to educate themselves without the logistical problems associated with traditional classroom learning. The framework shown in Figure 1 contends that the perceived ease of use and perceived usefulness of a system (in this case the e-learning system) will lead to use of the system (Pituch & Lee, 2006), and ongoing use of the system indicates satisfaction with the system (Davis et al., 1989). TAM looks at relationships between the target system - in this case Internet-based learning - and attitude, behavioural intention, perceived usefulness and perceived ease of use (Pituch & Lee, 2006). Perceived usefulness, in this case, refers to the degree to which the learner believes that Internet-based learning will contribute to their personal job performance, whereas perceived ease of use indicates the degree to which using Internet-based learning appears free of effort on their part (Saadé & Bahli, 2005). In order for software to be successful, it needs to be simple to use, and the learner should be able to navigate the system without experiencing compatibility issues when undertaking a technology-based e-learning course or module (Cetindamar & Olafsen, 2005; Li et al., 2008). Corporate e-learning can not only help in the instruction of generic national and international curricula but also provides a platform for the employee to benefit from courseware that is specific to the
organisation and the learner. How well a system is designed around employees’ needs, in terms of functionality and ability, can have a direct influence on whether or not there is continued use of the system (Konradt, Christophersen & Schaeffer-Kuelzb, 2006).
Environmental Dimension E-learning feedback needs to be communicated effectively within an organisation, or else a lack of commitment may be demonstrated (MacVicar & Vaughan, 2004). A characteristic of Internet-based learning is that it allows the learner to access assessment functionality; this gives the user a full learning cycle, from downloading materials, through submitting assignments, to booking and completing an exam or test - all online (Pituch & Lee, 2006). Diversity in the assessment of learners and assessment methods can thus positively influence perceived corporate e-learning satisfaction. The social interaction present in a classroom is lost with technology-based solutions such as distance e-learning (Welsh et al., 2003). Interaction between corporate learners and the learning practitioners is important, and the ability to provide e-learning feedback to learners through different methods creates learner confidence that their submissions are being attended to. Research shows that participation in technology-based e-learning will not happen when poor or inconsistent information is communicated within the organisation (Welsh et al., 2003). The organisation needs to show commitment to the
function, as it is not always considered and can be neglected (Arbaugh et al., 2009). Interaction with other students, the practitioners and the courseware itself can result in improved e-learning user satisfaction (Arbaugh, 2000). Introducing interaction between learners and practitioners can give rise to ideas and techniques, encouraging the learner to participate (Gold, Malhotra & Segars, 2001).
The South African Context There is general interest and participation in further education for adults in South Africa, partly because South Africa has not always had equality in the availability of and access to education (Wilson, 2008). The South African government has highlighted the importance of technology-based learning and is driving it as a strategy for developing education (Elearning Africa, 2006). There are, however, added problems in Sub-Saharan Africa, with Internet bandwidth restrictions and costs not being conducive to fast applications or downloading (Wilson, 2008). Learners in corporations benefit from Intranets and Web 2.0-based functionality, including Internet-based collaboration and searches that facilitate the sharing of experiences and problems (Wang, 2007).
RESEARCH DESIGN AND METHODOLOGY Research Objective and Hypotheses The research objective is to show which factors affect user satisfaction with Internet-based e-learning in South African corporations. To achieve this, the research model constructed by Sun et al. (2008) was used. It is an integrated model broken up into six dimensions, namely: learners, technology, instructors, design, courses and environment. The six dimensions have been broken down into the following thirteen hypotheses.
Learner Dimension
H1: A learner’s positive attitude towards computers has a positive influence on perceived corporate e-learning satisfaction.
H2: Learner anxiety towards computers has a negative influence on perceived corporate e-learning satisfaction.
H3: Learner self-efficacy with the Internet has a positive influence on perceived corporate e-learning satisfaction.

Instructor Dimension
H4: Instructor response timeliness towards the learners has a positive influence on perceived corporate e-learning satisfaction.
H5: An instructor’s positive attitude towards e-learning has a positive influence on perceived corporate e-learning satisfaction.

Course Dimension
H6: E-learning course flexibility has a positive influence on perceived corporate e-learning satisfaction.
H7: E-learning course quality has a positive influence on perceived corporate e-learning satisfaction.

Technology Dimension
H8: Quality of the technology has a positive influence on perceived corporate e-learning satisfaction.
H9: Quality of the Internet access has a positive influence on perceived corporate e-learning satisfaction.

Design Dimension
H10: Perceived usefulness of the corporate e-learning has a positive influence on learner satisfaction.
H11: Perceived ease of use of the corporate e-learning has a positive influence on learner satisfaction.
Environmental Dimension
H12: Diversity in assessment methods has a positive influence on perceived corporate e-learning satisfaction.
H13: Learner social interaction has a positive influence on perceived corporate e-learning satisfaction.

These hypotheses are summarised visually in Figure 2.

Figure 2. Dimensions and antecedents of perceived e-learner satisfaction (Source: Sun, Tsai, Finger & Chen, 2008)
Research Methodology The driving philosophy of this study is positivist, and it thus uses data based on observable social reality. An explanatory research approach was used to establish the fundamental relationships between the different variables and to provide further insight into the factors that influence perceived learner satisfaction. The data were gathered through an electronic survey within a quantitative research design. The research hypotheses were those used in the Sun et al. (2008) study in Taiwan. The present study thus also tested the instrument’s validity in a different country, as the original study was conducted in Taiwanese
universities, whereas this research focused on corporations in South Africa. The analysis of the responses was conducted using Statistica, a statistical software package from StatSoft. Regression analysis was used to analyse the data: the independent variables are the thirteen variables in the hypotheses mentioned above, and perceived learner satisfaction is the dependent variable (Sun et al., 2008).
Research Strategy An online questionnaire was created to gather the data. The link to this survey was included in an email accompanied by a covering letter documenting the intent, process and confidentiality of the study as presented to the participants. The survey was distributed via a secure website containing the questionnaire and a disclaimer assuring participants of the confidentiality of their participation. The online questionnaire was developed in PHP, and the survey data was entered and automatically saved into a MySQL database. This functionality reduced the effort of gathering the information and removed logistical problems in distributing and collecting the actual survey responses.
Table 1. Overview of the variables and number of test items for each hypothesis

Dimension | Variables | Code | No. of test items
Learner | Learner attitude towards computers | LAT | 5
Learner | Learner anxiety towards computers | LAX | 4
Learner | Learner self-efficacy with the Internet | LSE | 13
Instructor | Instructor response timeliness towards the learners | IRS | 1
Instructor | Instructor attitude towards e-learning | IAT | 2
Course | E-learning course flexibility | CFL | 8
Course | E-learning course quality | CQL | 3
Technology | Quality of the technology | TCQ | 4
Technology | Quality of the Internet | TIQ | 4
Design | Perceived usefulness | DPU | 4
Design | Perceived ease of use | DPE | 4
Environmental | Diversity in assessment | EDA | 1
Environmental | Learner social interaction | EPI | 9
Perceived Learner Satisfaction | Perceived learner satisfaction (dependent variable) | VDT | 9
This method was selected because the participants are exposed to technology-based learning in their organisations and are therefore familiar with this type of technology. Once the surveys were completed, the recorded information was exported from the relational database directly into spreadsheets or Statistica for manipulation and analysis. The South African corporations participating in this study gave permission for the study to be conducted and either submitted contact names or forwarded the mail to the identified resources in the various companies’ global address lists. There was little or no interaction between the participants and the researcher. Incentives were offered in the form of rewards for participating in the study and completing the questionnaire.
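As an aside, the export step described above can be sketched as follows. The original system used PHP and MySQL, with exports to spreadsheets and Statistica; the Python version below uses sqlite3 only as a stand-in for the database, and the table and column names are assumptions made for illustration.

```python
# Hypothetical sketch of exporting survey responses from a relational database
# into a spreadsheet file for analysis. sqlite3 stands in for the MySQL backend;
# the table and column names are illustrative assumptions.
import sqlite3
import pandas as pd

conn = sqlite3.connect("survey.db")  # stand-in for the production database
responses = pd.read_sql_query(
    "SELECT respondent_id, question_code, answer "
    "FROM survey_responses WHERE complete = 1",
    conn,
)

# One row per respondent, one column per question, ready for Statistica/Excel.
wide = responses.pivot(index="respondent_id",
                       columns="question_code",
                       values="answer")
wide.to_csv("responses_export.csv")
```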
Research Instrument The survey questionnaire used to study learner satisfaction was derived from the questionnaire of a previous study in which the “dimensions and antecedents of perceived e-Learner satisfaction”
model was developed. The model is well suited to establishing which factors affect learner satisfaction with Internet-based e-learning, and the questionnaire appears best suited to extracting the information required for this study (Sun et al., 2008). A first section containing demographics was added to the questionnaire to gather information about the participating organisations and to address the sub-hypotheses. The questionnaire is thus structured into two main parts: the first containing the demographic information and the second containing the dependent and independent variable questions, arranged in fourteen subsections (Table 1).
Data for Constructs The data was gathered from a sample drawn from South African corporations. A seven-point Likert scale was used as the rating scale in the questionnaire, ranging from “strongly disagree” to “strongly agree”.
The Likert scale is a psychometric scale used in survey questionnaires in which participants indicate their level of agreement with each statement (Trochim, 2006a).
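For readers less familiar with how such Likert items are handled, the sketch below shows one common way of turning a block of items into a construct score and checking its internal consistency with Cronbach’s alpha. The chapter itself does not prescribe this procedure; the file name and item names (loosely based on the codes in Table 1) are assumptions for illustration.

```python
# Hypothetical sketch: turning 1-7 Likert items into construct scores and
# checking internal consistency. Item and file names are illustrative only.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (one column per item)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = pd.read_csv("survey_responses.csv")

# Assumed item columns for the perceived-usefulness construct (DPU, 4 items).
dpu_items = responses[["DPU_1", "DPU_2", "DPU_3", "DPU_4"]]
print("Perceived usefulness alpha:", round(cronbach_alpha(dpu_items), 2))

# Construct score: the mean of a respondent's items for that construct.
responses["DPU"] = dpu_items.mean(axis=1)
```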
Sampling Method The participants in this survey are located in offices all over Southern Africa. The survey was developed for the Web in PHP (Hypertext Preprocessor), with MySQL, an open-source relational database, as the backend. The link to the survey was embedded in an email with an introduction. The Internet-based survey was developed and hosted by Eiledon Solutions cc (www.eiledon.co.za), a South African web development and hosting company. The survey was developed as a full content management system so that changes could be made to it when and where required. The site also contained a log that could be accessed by the administrators of the survey to view its completion progress. The site was built with user security and could only be accessed by those with a valid username and password. The data was collected in two phases: an initial mail sent to the participants via the selected coordinator in each organisation, and a follow-up mail sent to motivate further participation. Input was higher just after each mail was sent than at any other stage. Incentives were offered for participation in the survey in the form of three prizes: the first prize was a website sponsored by Eiledon Solutions cc, donated to a charity or a small company of the winner’s choice; the second prize was an Apple iPod; and the third prize was a DVD player. The survey was sent to 500 participants, and responses were received from 152 participants, a response rate of roughly 30%. Of the 152 responses, 32 were invalid due to incomplete or poor data entered into the demographic section of the questionnaire.
Privacy, Confidentiality and Ethics Physical access to the participants was not a requirement of this survey, and the research findings will be made available to the participants if required. The participants were under no obligation to take part in this study, and the fact that participation was non-compulsory was documented. Information received was to be kept confidential and used only for this study. Approval for Internet-based learning user participation was obtained from the learning sponsors, i.e. representatives from the learning organisations or the relevant management responsible for permission to distribute the survey to their respective employees. The background to the survey and the study was explained in the covering letter attached to the email distributing the survey link to the learners. The email containing the link to the Internet survey was sent to the sponsors for approval, and the study also obtained ethics approval from the University of Cape Town (UCT).
Target Population This research targets learners (management and non-management) and learning practitioners who have used or are using Internet-based learning systems in South African corporations. The organisations selected are from different industries within South Africa, including financial, retail and commercial institutions. Additionally, representatives were selected from learning institutions that provide products and services to South African corporations. The companies asked to participate have branches located all over Southern Africa, and the learning solutions providers asked to participate conduct training in and for various institutions nationally. The participating organisations were selected for their size, maturity, willingness to participate and the fact that their employees have been exposed
to Internet-based training. Participation in the survey was agreed with those who would be responsible and accountable for its distribution within their respective organisations. Certain organisations were reluctant to distribute the survey due to time constraints, workloads and having been subjected to similar surveys.
Limitations
The study uses the “Dimensions and antecedents of perceived e-Learner satisfaction” model (Sun et al., 2008) as a framework. As an extension, the framework also includes personal factors as an additional contextual factor. Further research could examine other factors more relevant to the South African context, such as legal policies and government regulations relating to technology-based e-learning. The types of learning content and materials produced by technology-based e-learning are not discussed in detail. The research is cross-sectional, conducted at a single point in time, and therefore cannot disclose the cause and effect of the factors influencing learning over a defined period. Results from this research may not be a true reflection of the learner experience of corporate South Africa due to the limited response from some organisations. The industries examined are not a wide enough spread, the learners examined have been very proficient with information technology, and the bulk of the respondents are from one organisation. This limitation arose because some organisations opted not to participate for reasons such as workload and similar initiatives already being run internally. Supplementary longitudinal research could be conducted to establish cause and effect. Such research should be administered across different organisations with as large a sample of participants as possible, which would reduce participant bias, though it might not eliminate it entirely.
DATA ANALYSIS AND FINDINGS
This section discusses the results of the statistical analysis; Statistica 8 was used to analyse the data. It first describes how the raw data were collected and prepared before being entered into the statistical package. Following this, the target population, statistics and outcomes are discussed. The next section examines the reliability and validity of the data collected and the testing used to verify the input. The relationships between the 13 independent variables and the dependent variable were then tested to establish support for the theory. The outcomes and findings of the statistical tests are collated, discussed and finally compared to the study of Sun et al. (2008). Pie charts and tables were compiled from Statistica 8 and Microsoft Excel 2007.
Learner and Organisation Demographic Information
The survey was sent to South African corporations and to organisations that provide Internet-based learning products and services to South African corporations. The participants needed to have experience with Internet-based learning. A total of 30 different organisations were targeted and invited to participate in the electronic survey. Responses were collected from only 23 of these organisations, even though all 30 had agreed to participate in the study. Each participating organisation had a contact person who was sent an email containing the electronic survey and a covering letter, which they distributed to the relevant audience. The survey was sent to 500 participants; 152 responded, 2 responses were disregarded because the demographic data had not been entered correctly, and a further 30 were eliminated because the surveys were incomplete. The contact people at the learning-provider organisations chose employees from their customer base. The survey participants were corporate employees (non-management and management) and learning practitioners providing corporate learning through Internet-based learning (Figure 3).
Figure 3. Demographic results
The responses came predominantly from one large corporation, which accounted for 47% of the responses. The remaining 53% of respondents came from 22 different organisations. The genders of the respondents were quite evenly spread: male 52.5% and female 47.5%. Half of the respondents came from the 31-40 year age group, roughly one quarter fell into the age groups immediately above and below, and only 3 respondents were aged above 50 years. The respondents' experience with Internet-based learning was quite divided, with 45% having more than 4 years' experience and 37% having 1 year or less; only 18% had between 2 and 4 years of Internet-based learning experience. Respondents had good computer competency levels, with 47% rating themselves at the intermediate level and 51% as experts. Another important factor is English language competency. The majority (71%) were English first-language speakers and a further 28% had English as their second language. Only 1% identified English as a barrier.
Validity and Reliability Analysis
The survey was initially sent to learning practitioners and professionals to obtain feedback. They were asked to confirm the validity and relevance of each question. This process resulted in several iterations of the survey until the quality assurers were satisfied that the content was acceptable and that the audience would be comfortable with the questions. The reliability of the test items in the questionnaire was tested using the Cronbach alpha statistic. Where the alpha coefficient was too low, test items were removed until the reliability of the remaining items was sufficiently high. The removal of questions was also guided by the factor analysis. Factor analysis was used to test the validity of the instrument to measure the various independent constructs within the six dimensions as well as the dependent variable, perceived learner satisfaction. The validity of a construct can be assessed by whether the items (questions) for that construct load onto their own factor. Although loadings greater than 0.7 are desirable, loadings of 0.6 or more were deemed acceptable due to the relatively large number of questions in relation to the relatively small sample (n = 120).
Table 2. Item reliability and construct validity

Variable (code) | Original test items | Question numbers removed | Remaining test items | Reliability (Cronbach α) | Validity (factor analysis)
LAT | 5 | 1, 2, 4 | 2 | .52 | Ok
LAX | 4 | 1 | 3 | .91 | High
LSE | 5 | - | 5 | .90 | High
IRS | 1 | - | 1 | NA | Ok
IAT | 1 | - | 1 | NA | Ok
CFL | 5 | 2 | 4 | .63 | Low
CQL | 3 | 1 | 2 | .81 | High
TCQ | 4 | - | 4 | .83 | High
TIQ | 4 | 3, 4 | 3 | .26 | Very low
DPU | 4 | - | 4 | .89 | High
DPE | 4 | - | 4 | .82 | Ok
EDA | 1 | - | 1 | NA | Ok
EPI | 5 | 1, 5 | 3 | .76 | Ok
VDT | 9 | 1 | 8 | .80 | Ok
The factor analysis used Varimax normalisation. With a critical cut-off eigenvalue of 1.0, 17 factors were extracted, which meant that some of the constructs consisted of more than one factor. The salient results of the factor analysis are discussed below; Table 2 summarises the reliability and validity analysis. Learner attitude towards computers (LAT) displayed a very low reliability: even after removing three of the five items, the Cronbach alpha only increased to a low 0.52. It is important to note, however, that the factor analysis loaded a factor for the remaining two questions, supporting the decision to remove the other three from the analysis. Learner anxiety towards computers (LAX) had a high reliability after removing the first test item, and the remaining questions loaded onto a single factor. Learner self-efficacy with the Internet (LSE) was reliably tested with all five questions (α = 0.902) and all questions loaded onto the same factor. The instructor dimension consisted of two variables, instructor response timeliness towards the learners (IRS) and instructor attitude towards e-learning (IAT), which were both measured with a single test item. The course dimension contained e-learning course flexibility (CFL), which displayed a fairly reliable Cronbach alpha of 0.63 after removing one question. However, the factor analysis loaded a factor for questions three and four but not for questions one and five; this limits the applicability of this factor and is noted. E-learning course quality (CQL) consisted of three questions. After the first question was removed, the Cronbach alpha increased to 0.81 and the factor analysis loaded a factor for the remaining two questions. The technology dimension included quality of the technology (TCQ), which had a high Cronbach alpha. The factor analysis loaded a factor for the first three of these questions. The fourth question had its highest loading, 0.698, in the same column as the first three questions, so it is reasonable to include it in the factor.
Figure 4. Correlations, reliabilities and descriptive statistics: study variables (n = 120)
However, quality of the Internet (TIQ) failed to yield a sufficiently high reliability: even after removing two questions, the Cronbach alpha was only 0.26, and the factor analysis failed to load a factor for the first two questions. This, together with the low Cronbach alpha, reduces the reliability and validity of this factor. The analysis was still completed using this factor, but the limitations of this result are noted. The design dimension examined two variables, perceived usefulness (DPU) and perceived ease of use (DPE), which both displayed high reliability (0.89 and 0.82 respectively). The DPU test items loaded cleanly and strongly onto a single factor. All of the DPE items loaded most strongly onto a single factor, although for two of these questions the loading was below 0.50 (0.46 and 0.41 respectively). These items and constructs have been validated many times in previous research, so their robustness is unsurprising. For learner social interaction (EPI), the first and fifth questions were removed (Cronbach alpha = 0.76) and the factor analysis extracted a factor for the second, third and fourth questions, supporting the removal of the first and fifth questions. Perceived learner satisfaction (VDT), the dependent variable, consisted of nine questions.
The eighth question was removed, giving a Cronbach alpha of 0.80. The factor analysis grouped questions 1, 3, 4 and 5 in one factor, and questions 2 (0.56) and 6 (0.59) can be argued to belong to this factor on the basis of their high loadings in the same column. Although questions 7 and 9 loaded as a separate factor, they have been included in this variable, but it is important to note the validity issue. This can possibly be attributed to the fact that these two questions were negatively phrased and were also the last two questions in the questionnaire. Figure 4 presents the correlations and descriptive statistics for the study variables; the Cronbach alphas (reliabilities) are shown in parentheses, and n/a denotes not applicable.
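The reliability and validity checks described above can be sketched in code. The example below assumes a hypothetical pandas DataFrame `items` with one column per retained survey item (coded as in the Appendix) and one row per respondent, and uses the third-party factor_analyzer package for the Varimax-rotated exploratory factor analysis; it is illustrative only and is not the Statistica 8 procedure actually used in the study.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party package, assumed available

def cronbach_alpha(scale: pd.DataFrame) -> float:
    """Cronbach's alpha for one construct (columns = items, rows = respondents)."""
    scale = scale.dropna()
    k = scale.shape[1]
    item_variances = scale.var(axis=0, ddof=1).sum()
    total_variance = scale.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# e.g. the five internet self-efficacy items; the reported alpha was 0.90
print(round(cronbach_alpha(items[["LSE01", "LSE02", "LSE03", "LSE04", "LSE05"]]), 2))

# Exploratory factor analysis: first count factors with eigenvalue > 1.0 ...
fa = FactorAnalyzer(rotation=None)
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()
n_factors = int((eigenvalues > 1.0).sum())   # the study extracted 17

# ... then refit with Varimax rotation and inspect the loadings (0.6 cut-off)
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns).round(2)
print(loadings)
```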
Influence of Demographic Variables on Perceived Learner Satisfaction
ANOVA testing was run using the demographic data against the dependent variable, since most demographic variables are categorical rather than continuous. ANOVA compares the variation in the dependent variable, perceived learner satisfaction, across the categories of each demographic variable to explore whether any differences are significant. This was used to test the sub-hypotheses listed above. The organisation, gender, computer skill level, English language level, role/position, age and Internet-based learning experience of the respondents showed no significant effects in the ANOVA testing (Table 3).
Table 3. ANOVA results for demographic variables against perceived learner satisfaction

Demographic variable | F-ratio | p-value
Organisation | 0.16995 | 0.6809
Gender | 0.6011 | 0.4397
Job role/position | 1.9291 | 0.1499
Age | 0.5771 | 0.5631
Computer skill | 0.4081 | 0.5242
English | 0.6011 | 0.4397
Internet-based learning experience | 0.4589 | 0.6331
Perhaps surprisingly, none of the demographic variables appears to have a statistically significant effect on perceived learner satisfaction.
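A minimal sketch of the one-way ANOVA used here is shown below. It assumes a hypothetical DataFrame `df` with one row per respondent, a mean satisfaction score in a "VDT" column and categorical demographic columns whose names are illustrative only; scipy's f_oneway is used here in place of Statistica 8.

```python
import pandas as pd
from scipy import stats

def demographic_anova(df: pd.DataFrame, factor: str, outcome: str = "VDT"):
    """One-way ANOVA of a categorical demographic factor against satisfaction."""
    groups = [group[outcome].dropna().values
              for _, group in df.groupby(factor)]
    return stats.f_oneway(*groups)

# Hypothetical demographic column names
for factor in ["organisation", "gender", "role", "age_group",
               "computer_skill", "english", "ibl_experience"]:
    f_ratio, p_value = demographic_anova(df, factor)
    print(f"{factor}: F = {f_ratio:.4f}, p = {p_value:.4f}")
```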
Findings for the Specific Hypotheses
The correlations between the dependent variable and the independent variables were tested using Spearman's rank correlation; the Spearman rank correlation coefficient makes no assumptions about normality, which is considered appropriate given the size of the sample and the results of the Cronbach alpha analysis. Spearman rank correlation is a non-parametric rank statistic, meaning that it is distribution-free, and it measures the strength of the relationship between two variables, in this case between each of the 13 independent variables and the dependent variable, perceived learner satisfaction. For each hypothesis there is a Spearman correlation test, and a relationship is considered significant where the Spearman correlation coefficient (R) and the p-level indicate so. The sample used in this research was 120 respondents, which is considered sufficient for this test. Table 4 lists the Spearman rank order correlations (modified variables) and highlights correlations that are significant at p < .05. The following hypotheses appear to be supported by the data analysis.
Table 4. Spearman rank order correlations (modified variables)

Variables | Spearman (R) | t | p
LAT vs VDT | -0.084745 | -0.923889 | 0.357430
LAX vs VDT | -0.037508 | -0.407733 | 0.684209
LSE vs VDT | 0.071283 | 0.776308 | 0.439120
IRS vs VDT | 0.158512 | 1.743934 | 0.083775
IAT vs VDT | 0.218775 | 2.435503 | 0.016367*
CFL vs VDT | 0.297320 | 3.382689 | 0.000974***
CQL vs VDT | 0.070616 | 0.769010 | 0.443424
TCQ vs VDT | 0.111113 | 1.214511 | 0.226977
TIQ vs VDT | 0.070055 | 0.762862 | 0.447068
DPU vs VDT | 0.394664 | 4.665905 | 0.000008***
DPE vs VDT | 0.311126 | 3.556198 | 0.000543***
EDA vs VDT | 0.075906 | 0.826935 | 0.409943
EPI vs VDT | 0.244084 | 2.734129 | 0.007219**
Significance-level: * = 5%-level, ** = 1%-level; *** = 0.1%-level
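The correlations in Table 4 can be reproduced with scipy's spearmanr; the sketch below also recomputes the t statistic from R and n = 120 in the usual way, t = R·sqrt((n − 2)/(1 − R²)). The DataFrame `df`, holding one averaged score per construct per respondent, is hypothetical.

```python
import numpy as np
from scipy import stats

independents = ["LAT", "LAX", "LSE", "IRS", "IAT", "CFL", "CQL",
                "TCQ", "TIQ", "DPU", "DPE", "EDA", "EPI"]
n = len(df)                                  # 120 usable respondents

for var in independents:
    r, p = stats.spearmanr(df[var], df["VDT"])
    t = r * np.sqrt((n - 2) / (1 - r ** 2))  # t statistic as listed in Table 4
    stars = "*" * sum(p < cut for cut in (0.05, 0.01, 0.001))
    print(f"{var} vs VDT: R = {r: .6f}, t = {t: .6f}, p = {p:.6f} {stars}")
```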
Learner Dimension
There are no statistically significant relationships between perceived learner satisfaction and learner attitude, learner anxiety or self-efficacy, since the p-values are all above 0.05.
Instructor Dimension
The null hypothesis of there being no statistically significant relationship between perceived learner satisfaction and instructor attitude can be rejected at the 5% significance level, and we can therefore support the hypothesis that there is a positive relationship between instructor attitude and perceived learner satisfaction.
Course Dimension
There is a highly significant relationship between perceived learner satisfaction and course flexibility (p < .001).
Technology Dimension
There is no significant relationship between perceived learner satisfaction and either the quality of the technology or the quality of the Internet.
Design Dimension
There are highly significant relationships between perceived learner satisfaction and both the perceived usefulness and the perceived ease of use of Internet-based learning, with correlation coefficients of approximately 0.39 and 0.31 respectively.
Environmental Dimension
Learner social interaction shows a significant positive relationship with perceived corporate e-learning satisfaction.
RESEARCH FINDINGS
The previous study conducted in Taiwan (Sun et al., 2008) found evidence to support seven of the hypotheses, whereas this research supported five. Five independent variables had p-values less than .05, supporting the theory that they have a significant relationship with the dependent variable. The five factors were the following: the attitude of the instructor towards e-learning, the flexibility of the course, the perceived usefulness, the perceived ease of use and the learner social interaction.
Learner Dimension
The previous study (Sun et al., 2008) showed evidence of decreased perceived learner satisfaction where students had anxiety towards computers; this research did not show enough evidence to support any of the learner dimension hypotheses put forward. The South African corporations which participated in the electronic
survey for this research came predominantly from the information technology sector and, as can be seen from the computer competency question in the demographic section, almost ninety-seven percent (97%) of the respondents rated themselves at an intermediate or expert skill level with computers. There was no significant evidence of any influence on perceived learner satisfaction from attitude towards computers, anxiety towards computers or Internet self-efficacy. Given the homogeneous nature of the respondents, the sub-hypotheses were also not supported: the nature of the organisation, the role/position, the age groupings, the computer skills, the English language level and the extent of Internet-based learning experience all had insufficient impact on learner satisfaction to be significant.
Instructor Dimension
The research does not show sufficient evidence to support the findings of Hammer and Champy (2001), Kenworthy and Wong (2005), Meier (2007), and O’Neil, Singh and O’Donoghue (2004) that the timeliness of instructor responses to learners has a significant effect on perceived learner satisfaction. The second part of the findings around the instructor dimension corroborates those of Hamid (2002) and Falowo (2007): the attitude of the instructor towards Internet-based learning is significant to the perceived satisfaction of the learner. The survey was conducted with a mature audience, with seventy-nine percent (79%) being over the age of thirty (30). In the corporate environment, learners can generally only participate in learning initiatives on a part-time basis, because their jobs are full-time, so it is important that the instructor responds to their needs quickly, has a good attitude towards Internet-based learning in general, and meets the challenges presented in a South African context with regard to elements such as the learning material.
Course Dimension
There was evidence to support that course flexibility influences the corporate learner's perceived satisfaction with Internet-based learning, but there was insufficient evidence to corroborate that the quality of the course influences perceived learner satisfaction. Supporting Macpherson et al. (2004), Pituch and Lee (2006), Rabak and Cleveland-Innes (2006) and Roffe (2002), the ability of the learner to repeat learning, to enter the course at their own discretion and to manage the course themselves, together with the turn-around speed and access to the learning environment without having to worry about equipment or venue, significantly influences the perceived satisfaction of the learner. The corporate learner, under work pressures and time constraints, benefits from a flexible, accessible environment such as Internet-based learning to complete courses or continue their adult education. South African corporations should investigate further exploitation of this finding, especially if they are still engaged in more traditional approaches to learning, such as traditional classroom learning. There was no evidence to support the hypothesis that the quality of the course had an influence on perceived learner satisfaction. The quality of the learning material and its distribution could differ from one organisation to the next, and in some cases Internet-based learning could still be a new initiative without the solid backing or investment enjoyed by other functions within the corporation, without this affecting perceived learner satisfaction.
Technology Dimension
Neither of the factors explored under the technology dimension showed any influence on perceived learner satisfaction with Internet-based learning. The technology available to learners within the South African corporations could differ as a result of the different types of
networks used and the levels of bandwidth available in each organisation, but learners could also choose to further their education by connecting from home and could then be challenged by South African bandwidth costs and restrictions (Cronje, 2008; Wang, 2007; Wilson, 2008). The fact that there is not enough evidence to support either of the technology dimension factors does not necessarily mean that there is something wrong with the technology available to the learners; the respondents simply did not perceive that the technology or the Internet had a positive influence on their e-learning satisfaction. It must be noted that the quality of the Internet factor had a Cronbach alpha of 0.26 and did not load as a factor, which could have influenced this finding.
Design Dimension
This dimension showed evidence that both the perceived ease of use and the perceived usefulness of Internet-based learning influence perceived learner satisfaction. As per Cetindamar and Olafsen (2005) and Saadé and Bahli (2005), Internet-based learning provides a mechanism that enables those who work full-time in corporations to further their education with little effort. This research supports the view that perceived ease of use has a significant influence on perceived learner satisfaction. The findings also support those of Konradt et al. (2006) and Saadé and Bahli (2005) that Internet-based education provides courseware which, being developed specifically for the learner, can be easily changed according to differing requirements. The course material can potentially contribute to the learners' job performance. This supports the hypothesis that the perceived usefulness of Internet-based learning experienced in South African corporations influences perceived learner satisfaction. TAM is a framework that predicts user acceptance of a technology (Pituch & Lee, 2006). It concentrates on the two key sets of user beliefs,
that is, the perceived ease of use and the perceived usefulness of a system, that influence the adoption and continued use of a new technology (Davis et al., 1989). The relevant technology in this research is Internet-based learning. This research showed evidence that both perceived usefulness and perceived ease of use are associated with user satisfaction. Continued use implies satisfaction (or a lack of choice), and it is thus contended that the two key factors of the TAM framework are supported with regard to Internet-based learning in the Southern African corporate environment.
Environmental Dimension
The environmental dimension explored two factors: assessment diversity and the social interaction of the learner. There was not enough evidence to support assessment diversity as a relevant factor. Unlike students in higher learning institutions, corporate learners are less influenced by assessment diversity, as the courseware is often in a single format and developed around their specific needs. The social interaction experienced by students at higher learning institutions in a traditional classroom environment will be stronger than it is in e-learning; the corporate learner is isolated from these types of interactions due to the nature of their job and work environment. There is evidence to support Arbaugh (2000) and Gold, Malhotra and Segars (2001) that the interaction of learners with the instructors and other students has an influence on perceived learner satisfaction.
COMPARISON WITH PREVIOUS TAIWANESE STUDY
This section compares the findings with those of the previous study (Sun et al., 2008), which was the primary source for this study and whose instrument was used in adapted form. The comparison first looks at the differences in the demographic data and thereafter compares the data used to examine the hypotheses.
Figure 5. Comparison of demographic data
The Sun et al. (2008) research was conducted at two universities in Taiwan with students who had completed an e-learning course. The Sun et al. (2008) sample comprised 295 respondents, whereas this research had 120 respondents. The nature of the respondents differed, as this research sample was drawn from the South African corporate environment. This sample was also more generalised, as it did not focus on a particular course but on participants who had experienced Internet-based learning at some stage. The length of the instrument also differed: due to work schedules and timing constraints, the questionnaire was reduced from seventy-four questions to fifty-five questions. Figure 5 shows the demographic data gathered from each study. The reliability analysis conducted by Sun et al. (2008) showed that item reliability was high for all items where the Cronbach alpha could be measured, all above 0.7. This research did not achieve the same level of reliability, but with the exception of three factors, a measure of over 0.7 was obtained. Of the three factors below 0.7, course flexibility was above 0.6 and learner attitude towards computers was above 0.5, while the quality of the Internet factor in the technology dimension could only use two of the questions, resulting in a Cronbach alpha of 0.26. The need to remove questions in order to increase the Cronbach alpha scores may have been due to respondents answering too quickly, as nearly 75% answered the questions in under 10 minutes, including the time to load each page. In addition, the use of reverse-coded questions may have affected these scores. A summary of the findings is presented in Table 5.
Table 5. Comparison of results to the study of Sun et al. (2008)

Dimension | Variable | This research | Sun et al. (2008)
Learner | Learner attitude towards computers | Not supported | Not supported
Learner | Learner anxiety towards computers | Not supported | Supported
Learner | Learner self-efficacy with the internet | Not supported | Not supported
Instructor | Instructor response timeliness towards the learners | Not supported | Not supported
Instructor | Instructor attitude towards e-learning | Supported | Supported
Course | E-Learning course flexibility | Supported | Supported
Course | E-Learning course quality | Not supported | Supported
Technology | Quality of the technology | Not supported | Not supported
Technology | Quality of the Internet | Not supported | Not supported
Design | Perceived usefulness | Supported | Supported
Design | Perceived ease of use | Supported | Supported
Environmental | Diversity in assessment | Not supported | Supported
Environmental | Learner social interaction | Supported | Not supported
Learner Dimension
The previous study (Sun et al., 2008) showed evidence that students' anxiety towards computers influenced perceived learner satisfaction, but this research did not show evidence to support the hypothesis that the two factors are linked. The South African corporations which participated in the electronic survey for this research came predominantly from the information technology sector. In addition, almost ninety-seven percent (97%) of the respondents rated themselves at an intermediate or expert skill level with computers. This level of familiarity and expertise would have an impact on the way the respondents viewed anxiety. Neither study found support for the influence of learner attitude or learner self-efficacy.
Instructor Dimension
Both studies found support for the influence of instructor attitude on learner satisfaction, and neither found support for the influence of instructor response timeliness on learner satisfaction.
Course Dimension
Both studies found support for the influence of course flexibility. This study did not find support for the influence of course quality. The previous research (Sun et al., 2008) focused on a particular course at the Taiwanese universities, whereas this study explored learners' participation with Internet-based learning in general; this could have influenced perceptions of the impact of course quality.
Technology Dimension
Neither study found any support for the influence of either of the two technology factors.
Design Dimension
Both studies found support for the influence of both design factors.
Environmental Dimension
Interestingly, the two factors under this dimension produced different results in the two studies. The prior study found support for the influence of diversity in assessment. That study was conducted in universities with more traditional teaching methods, so diversity of assessment was likely to be more appealing to the students. There is less opportunity for such diversity in the corporate environment, and this seems to have resulted in the lack of support for this factor in this study. The prior study did not find support for the influence of learner social interaction, while this study did show support for this factor. The social interaction gained on an Internet-based learning course would give corporate learners an alternative platform for collaboration and the sharing of ideas between individuals in different divisions or organisations within Southern Africa. This would influence perceived learner satisfaction.
LIMITATIONS AND FUTURE RESEARCH
The research evaluated which factors affect perceived learner satisfaction in the Southern African corporate environment by adapting a previously used instrument (Sun et al., 2008). The instrument was tested with respondents from twenty-three different South African organisations across the country. However, 47% of the respondents came from a single organisation. In addition, 51% of the respondents rated themselves as experts and 47% as intermediate. The research would have been enhanced by increasing the size of the sample and by obtaining a more diverse range of respondents. The research would have benefited particularly if
the respondents had been drawn from different sectors that are not so technology focused. In addition, the respondents could have had a wider range of experience and backgrounds. The instrument could also be modified to exclude the questions that needed recoding due to being negatively phrased.
CONCLUSION
Employees in organisations in the Southern African corporate environment have a need to continue and further their education. Internet-based learning is becoming an increasingly important and strategic mechanism for educating the organisational workforce, serving as an alternative to traditional classroom-based teaching methods. The ability to transfer knowledge, instruct and assess by means of a versatile, flexible, cost-effective and fast platform is needed in the corporate environment. This research was conducted to understand and explain which factors influence perceived learner satisfaction with Internet-based learning, by examining six dimensions with thirteen factors. The research was conducted with large South African corporations and learning providers supplying learning products and services to corporate South Africa. The findings were compared with those of a similar study (Sun et al., 2008). An electronic survey was sent to employees, management and learning practitioners from twenty-three different organisations within Southern Africa, and there were 120 usable responses. The data were analysed using the Spearman rank-order correlation coefficient. The research showed significant relationships for the following five factors: instructor attitude towards e-learning, e-learning course flexibility, perceived usefulness, perceived ease of use and learner social interaction. The instructor's attitude towards the technology was found to be important. This shows that South African corporations and providers of Internet-based learning products and services should
select instructors who embrace and promote the use of these types of systems and software. There was support for the relevance of course flexibility, which is not surprising given the pressures of the corporate environment. South African corporations could examine the extent to which they allow the learner to repeat learning, take courses at their own discretion and manage their own courses. The flexibility offered by Internet-based learning is of high importance, especially to those organisations still engaged in more traditional approaches to learning. The course design was also found to be important for corporate learners: the perceived usefulness and ease of use of Internet-based learning courses positively impact perceived learner satisfaction. There was also support for the significance of learner social interaction as a variable. Due to the environment that the learner works in and is exposed to in a South African corporation, the learner would not have the same social interaction as a student; however, Internet-based learning allows the learner to interact with learners and employees of other South African corporations. The research provides insights for South African corporations and institutions supplying learning products on where to focus their efforts and how to enhance their Internet-based learning initiatives. By doing this, these companies will further enhance the learning experience and perceived learner satisfaction. Four of the five factors found to influence learner satisfaction were also supported by the prior research undertaken by Sun et al. (2008), adding further support to those four factors (instructor attitude, course flexibility, perceived usefulness and perceived ease of use). The five factors identified in this study could be considered by organisations in the corporate environment when implementing Internet-based learning, so as to yield more successful results. Finally, it is interesting to note that the two key independent variables hypothesized in the
“classical” technology acceptance model (TAM) are also shown here to exert the strongest influence on perceived learner satisfaction, though they do not present the full picture.
REFERENCES
Arbaugh, J. B. (2000). Virtual classroom characteristics and student satisfaction with Internet-based MBA courses. Journal of Management Education, 24(1), 32–54. doi:10.1177/105256290002400104 Arbaugh, J. B., Godfrey, M. R., Johnson, M., Pollack, B. L., Niendorf, B., & Wresch, W. (2009). Research in online and blended learning in the business disciplines: Key findings and possible future directions. The Internet and Higher Education, 12, 71–87. doi:10.1016/j.iheduc.2009.06.006 Ardichvili, A., Maurer, M., Li, W., Wentling, T., & Stuedemann, R. (2006). Cultural influences on knowledge sharing through online communities of practice. Journal of Knowledge Management, 10(1), 94–107. doi:10.1108/13673270610650139 Brown, K. (2001). Using computers to deliver training: Which employees learn and why? Personnel Psychology, 54, 271–296. doi:10.1111/j.1744-6570.2001.tb00093.x Cetindamar, D., & Olafsen, R. N. (2005). Elearning in a competitive firm setting. Innovations in Education and Teaching International, 42(4), 325–335. doi:10.1080/14703290500062581 Cronje, J. C. (2006). Pretoria to Khartoum - how we taught an Internet-supported Masters’ programme across national, religious, cultural and linguistic barriers. Journal of Educational Technology & Society, 9(1), 276–288.
Dagada, R., & Jakovljevic, M. (2004). Where have all the trainers gone? E-learning strategies and tools in the corporate training environment. South African Institute of Computer Scientists and Information Technologists on IT Research in Developing Countries. ACM International Conference Proceeding Series, (pp. 194-203). Stellenbosch, Western Cape, South Africa. Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 982–1003. doi:10.1287/ mnsc.35.8.982 DeRouin, R. E., Fritzsche, B. A., & Salas, E. (2004). Optimizing e-learning: Research-based guidelines for learner-controlled training. Human Resource Management, 43(2-3), 147–162. doi:10.1002/hrm.20012 DeRouin, R. E., Fritzsche, B. A., & Salas, E. (2005). E-learning in organizations. Journal of Management, 31(6), 920–940. doi:10.1177/0149206305279815 Elearning Africa. (2006). Elearning Africa report. Retrieved August 29, 2009, from http:// www.elearning-africa.com/ pdf/ report/ postreport_eLA2006.pdf Engelbrecht, E. (2003). E-learning – from hype to reality. Progressio, 25(1). Falowo, R. O. (2007). Factors impeding implementation of Web-based distance learning. AACE Journal, 15(3), 315–338. Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley. Forster, D. A., Dawson, V. M., & Reid, D. (2005). Measuring preparedness to teach with ICT. Australasian Journal of Educational Technology, 21(1), 1–18.
Fry, K. (2001). E-learning markets and providers: Some issues and prospects. Education + Training, 233-239. Gold, A. H., Malhotra, A., & Segars, A. H. (2001). Knowledge management: An organizational capabilities perspective. Journal of Management Information Systems, 18(1), 185–214. Hamid, A. A. (2002). E-learning: Is it the ‘e’ or the ‘learning’ that matters? The Internet and Higher Education, 4, 311–316. doi:10.1016/S1096-7516(01)00072-0 Hammer, M., & Champy, J. (2001). Reengineering the corporation: A manifesto for business revolution. London, UK: Nicholas Brealey Publishing. Hodges, C. B. (2004). Designing to motivate: Motivational techniques to incorporate in e-learning experiences. The Journal of Interactive Online Learning, 2(3), 311–316. Hoffman, D. W. (2002). Internet-based distance learning in higher education. Tech Directions, 62(1), 28–32. Kenworthy, J., & Wong, A. (2005). Developing managerial effectiveness: Assessing and comparing the impact of development programmes using a management simulation or a management game. Developments in Business Simulations and Experiential Learning, 32. Konradt, U., Christophersen, T., & Schaeffer-Kuelz, U. (2006). Predicting user satisfaction, strain and system usage of employee self-services. International Journal of Human-Computer Studies, 64(11), 1141–1153. doi:10.1016/j.ijhcs.2006.07.001 Lau, R. (2000). Issues and outlook of e-learning. Business Review (Federal Reserve Bank of Philadelphia), 31, 1–6.
Li, Q. R., Lau, R. W. H., Shih, T. K., & Li, F. W. B. (2008). Technology supports for distributed and collaborative learning over the Internet. ACM Transactions on Internet Technology, 24.
O’Neil, K., Singh, G., & O’Donoghue, J. (2004). Implementing e-learning programmes for higher education: A review of the literature. Journal of Information Technology Education, 3, 313–323.
Lui, Y., & Wang, H. (2009). A comparative study on e-learning technologies and products: From the East to the West. Systems Research and Behavioral Science, 26, 191–209. doi:10.1002/sres.959
O’Sullivan, P. B. (2000). Communication technologies in an educational environment: Lessons from a historical perspective. In Cole, R. A. (Ed.), Issues in Web-based pedagogy: A critical primer (pp. 49–64). Westport, CT: Greenwood press.
Macpherson, A., Eliot, M., Harris, I., & Homan, G. (2004). E-learning: Reflections and evaluation of corporate programmes. Human Resource Development International, 295–313. doi:10.1080/13678860310001630638 MacVicar, A., & Vaughan, K. (2004). Employees’ pre-implementation attitudes and perceptions to e-learning: A banking case study analysis. Journal of European Industrial Training, 28(5), 400–413. doi:10.1108/03090590410533080 Marold, K. A., Larsen, G., & Moreno, A. (2000). Web-based learning: Is it working? A comparison of student performance and achievement in Web-based courses and their in-classroom counterparts. Proceedings of the 2000 Information Resources Management Association International Conference on challenges of Information Technology management in the 21st century, (pp. 350–353). Anchorage, Alaska, United States. Meier, C. (2007). Enhancing intercultural understanding using e-learning strategies. South African Journal of Education, 27, 655–671. Muilenburg, L. Y., & Berge, Z. L. (2005). Student barriers to online learning: A factor analytic study. Distance Education, 26(1), 29–48. doi:10.1080/01587910500081269 Nixon, J. C., & Helms, M. M. (1997). Developing the virtual classroom: A business school example. Education + Training, 39(9), 349-353.
Ong, C. H., & Lai, J. Y. (2006). Gender differences in perceptions and relationships among dominants of e-learning acceptance. Computers in Human Behavior, 22(5), 816–829. doi:10.1016/j.chb.2004.03.006 Piccoli, G., Ahmad, R., & Ives, B. (2001). Web-based virtual learning environments: A research framework and a preliminary assessment of effectiveness in basic IT skills training. Management Information Systems Quarterly, 25(1), 401–426. doi:10.2307/3250989 Pituch, K. A., & Lee, Y. K. (2006). The influence of system characteristics on e-learning use. Computers & Education, 47, 222–244. doi:10.1016/j.compedu.2004.10.007 Rabak, L., & Cleveland-Innes, M. (2006). Acceptance and resistance to corporate e-learning: A case from the retail sector. Journal of Distance Education, 115–134. Roffe, I. (2002). E-learning: Engagement, enhancement and execution. Quality Assurance in Education, 10(1), 40–50. doi:10.1108/09684880210416102 Saadé, R., & Bahli, B. (2005). Valuing the impact of cognitive absorption on perceived usefulness and perceived ease of use in on-line learning: An extension of the technology acceptance model. Information & Management, 42, 317–327. doi:10.1016/j.im.2003.12.013
Stewart, C. L., & Waight, B. L. (2005). Valuing the adult learner in e-learning: Part one – a conceptual model for corporate settings. Journal of Workplace Learning, 337–345.
Welsh, E. T., Wanberg, C. R., Brown, K. G., & Simmering, M. J. (2003). E-learning: Emerging uses, empirical results and future directions. International Journal of Training, 245-258.
Strother, J. (2002). An assessment of the effectiveness of e-learning in corporate training programs. International Review of Research in Open and Distance Learning, 3(1).
Westerman, J. W., & Yamamura, J. H. (2007). Generational preferences for work environment fit: Effects on employee outcomes. Career Development International, 12(2), 150–161. doi:10.1108/13620430710733631
Sun, P., Tsai, R. J., Finger, G., & Chen, Y. (2008). What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers & Education, 50, 1183–1202. doi:10.1016/j.compedu.2006.11.007 Trochim, W. M. (2006a). Likert scaling. Retrieved May 14, 2009, from http://www.socialresearchmethods.net/kb/scallik.php Trochim, W. M. (2006b). Deduction & induction. Retrieved June 25, 2009, from http://www.socialresearchmethods.net/kb/dedind.php Van der Spuy, M., & Wöcke, A. (2006). The effectiveness of technology based (interactive) distance learning methods in a large South African financial. South African Journal of Business Management, 34(2). Wang, Y. (2003). Assessment of learner satisfaction with asynchronous electronic learning systems. Information & Management, 41, 75–86. doi:10.1016/S0378-7206(03)00028-4 Wang, Y., Wang, H., & Shee, D. Y. (2007). Measuring e-learning systems success in an organizational context: Scale development and validation. Computers in Human Behavior, 23, 1792–1808. doi:10.1016/j.chb.2005.10.006
Wilson, T. (2008). New ways of mediating learning: Investigating the implications of adopting open educational resources for tertiary education at an institution in the United Kingdom as compared to one in South Africa. International Review of Research in Open and Distance Learning, 9(1). Yee, H. T. K., Luan, W. S., Ahmad, F. M. A., & Mahmud, R. A. (2009). Review of the literature: Determinants of online learning among students. European Journal of Soil Science, 8(2), 246–252. Yushau, B. (2006). Computer attitude, use, experience, software familiarity and perceived pedagogical useful of mathematics professors. Eurasia Journal of Mathematics, Science & Technology Education, 2(3). Zhang, D., Zhao, J. L., Zhou, L., & Nunamaker, J. F. (2004). Can e-learning replace classroom learning? Communications of the ACM, 47(5), 74–79. doi:10.1145/986213.986216
APPENDIX: SURVEY INSTRUMENT

Demographic Section
Name
Organisation
Gender: Male/Female
Department/Business Unit
Role/Position: Non-management/Employee manager/Learning Practitioner
Age Group: 18-30/31-40/41-50/51-60
Years of internet based learning experience: 1 year/2 years/3 years/4 years/More than 4 years
Computer ability rating: Novice/Intermediate/Expert
English Skilled: 1st Language/2nd Language/Struggle with English
Learner Dimension
I believe that working with computers...
LAT01 Is very difficult (R)
LAT02 Requires technical ability (R)
LAT03 Can be done only if one knows a programming language such as Basic (R)
LAT04 Makes a person more productive at his/her job
LAT05 Is for young people only (R)
Working with a computer...
LAX01 Makes me very nervous
LAX02 Gives me a sinking feeling when I think of trying to use a computer
LAX03 Makes me feel uncomfortable
LAX04 Makes me feel uneasy and confused
I feel confident...
LSE01 Connecting to the Internet homepage that I want
LSE02 Downloading necessary materials from the Internet
LSE03 Linking to desired screens by clicking on it
LSE04 Using Internet search engines such as Yahoo, Google and MSN
LSE05 Selecting the right search terms for Internet search
Instructor Dimension
I feel confident...
IRS01 I received comments on assignments or examinations for the internet courses I have undertaken in a timely manner
IAT01 That compared to traditional classroom based learning, instructors think an internet based course is useful
Course Dimension
I feel confident...
CFL01 The advantages of taking a course via the Internet outweigh any disadvantages
CFL02 There were no serious disadvantages to taking a course via the Internet
CFL03 Taking a course via the Internet allows me to arrange my work schedule more effectively
CFL04 Taking a course via the Internet allows me to take a class I would otherwise have to miss
CFL05 Taking a course via the Internet will allow me to finish the course more quickly
I feel confident...
CQL01 Conducting a course via the internet improves the quality of the course compared to other courses
CQL02 The quality of an internet based course compares favourably to my other courses
CQL03 I feel the quality of an internet based course I took was largely unaffected by conducting it via the Internet
Technology Dimension
I feel the information technologies used in e-learning...
TCQ01 Is very easy to use
TCQ02 Has many useful functions
TCQ03 Has good flexibility
TCQ04 Are easily accessible
TIQ001 I feel satisfied with the speed of the Internet
TIQ002 I feel the quality of Internet communication is not good (Blogs/Wikis/Emails) (R)
TIQ003 I feel the fee to connect to the Internet is very expensive (R)
TIQ004 I feel it's easy to go on-line
Design Dimension
DPU01 I feel that web-based learning systems would make me more effective at my job
DPU02 I feel that web-based learning systems would improve my performance at work
DPU03 I feel that web-based learning systems would be useful at work
DPU04 I feel that web-based learning systems would enhance my productivity at work
DPE01 It would be easy for me to become skilful at using web-based learning systems
DPE02 Learning to operate web-based learning systems would be easy for me
DPE03 I would find it easy to get a web-based learning system to do what I want it to do
DPE04 I would find web-based learning systems easy to use
Environmental Dimension
EDA01 Web based learning systems should offer a variety of ways of assessing my learning (quizzes, written work, oral presentation, etc.)
EPI01 Student-to-student interaction is more difficult on a web based learning system than on other courses (R)
EPI02 I learned more from my fellow students on a web based learning system than on other courses
EPI03 I felt that the quality of class discussions is high throughout a web based course
EPI04 It is easy to follow class discussions on a web based course
EPI05 Once the students become familiar with the web-based learning system, it has very little impact on the class (R)
Perceived E-Learner Satisfaction (Dependent Variable)
VDT01 I have been satisfied taking courses via the Internet
VDT02 If I had an opportunity to take another course via the Internet, I would gladly do so
VDT03 My choice to take this course via the Internet was a wise one
VDT04 I was very satisfied with courses I have taken on the internet
VDT05 I feel that courses on the Internet served my needs well
VDT06 I will take as many courses via the Internet as I can
VDT07 I was disappointed with the way courses have worked out that I have taken using the internet (R)
VDT08 If I had an opportunity to take a variety of courses via the Internet, I would gladly do so (R)
VDT09 Conducting the course via the Internet made it more difficult than other courses I have taken (R)
Note: (R) – Reversed coded question
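Since several items in the instrument are reverse coded (R), responses to those items need to be recoded before scale scores and reliabilities are computed. The sketch below is illustrative only: it assumes a five-point Likert scale (the chapter does not state the number of points, so scale_max is a parameter) and a hypothetical DataFrame `raw` whose columns use the item codes above.

```python
import pandas as pd

# Items flagged (R) in the instrument above
REVERSED = ["LAT01", "LAT02", "LAT03", "LAT05",
            "TIQ002", "TIQ003",
            "EPI01", "EPI05",
            "VDT07", "VDT08", "VDT09"]

def reverse_code(raw: pd.DataFrame, items=REVERSED, scale_max: int = 5) -> pd.DataFrame:
    """Recode negatively phrased items so that higher scores are always favourable."""
    recoded = raw.copy()
    for col in items:
        if col in recoded.columns:
            recoded[col] = (scale_max + 1) - recoded[col]
    return recoded

# Usage: construct scores are then the row means of each item group, e.g. VDT
clean = reverse_code(raw)
clean["VDT"] = clean[[f"VDT0{i}" for i in range(1, 10)]].mean(axis=1)
```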
Chapter 13
Student Personality and Learning Outcomes in E-Learning:
An Introduction to Empirical Research Eyong B. Kim University of Hartford, USA
ABSTRACT
Web-based courses are a popular format in the e-learning environment. Among students enrolled in Web-based courses, some students learn a great deal, while others do not. There are many possible reasons for these differences in learning outcomes (e.g., a student's learning style, satisfaction, motivation, etc.). In the last few decades, student personality has emerged as an important factor influencing learning outcomes in the traditional classroom environment. Among the different personality models, the Big-Five model of personality has been successfully applied to help understand the relationship between personality and learning outcomes. Because Web-based courses are becoming popular, the Big-Five model is applied here to find out whether students' personality traits play an important role in Web-based course learning outcomes.
INTRODUCTION
Electronic learning (e-learning) can be implemented using a variety of methods such as computer-based learning, virtual classrooms, Web-based learning, and other Internet technologies. Even though it is possible to teach an entire
course via an email system (Phoha, 1999), most universities and colleges currently offer online courses utilizing Web technologies. Web-based instruction is a “hypermedia based instructional program which utilizes the attributes and resources of the World Wide Web to create a meaningful learning environment where learning is fostered and supported” (Khan, 1997). Web-based courses
provide more convenience, flexibility, currency of material, student retention, individualized learning, and feedback than traditional classrooms, while removing geographical barriers (Kiser, 1999). In addition, a variety of teaching tools (instructional options) can be implemented in Web-based courses that are not available in a traditional classroom setting. Students prefer online (Web-based) courses if the courses are properly structured with a variety of course activities such as a discussion forum (Tello, 2007). Because of this, when Web-based courses are offered at a university, they generally fill quickly. Satisfaction is a major factor among students taking online courses (Levy, 2007), and students often feel that e-learning courses are compatible with other potential course formats (Allen, Bourhis, Burrell, and Mabry, 2002). Even though Web-based courses have become very popular recently, their effectiveness may vary among students. For example, one student commented: "This is my first Web course and my last. I didn't learn a thing." This comment raises the issue of different learning outcomes among students in the same Web-based course. There can be many possible reasons for different learning outcomes, such as the student's personality, learning style, satisfaction, motivation, and others. In this chapter, important factors influencing learning outcomes will be reviewed briefly. The Big-Five personality model will be introduced, and the relationships between the Big-Five personality traits and learning outcomes will be discussed.
PREVIOUS RESEARCH ON THE EFFECTIVENESS OF ONLINE COURSES
Because online courses are a relatively new method of learning, the first question anyone may have is whether online courses are as effective as traditional classroom courses. The research results on the
effectiveness of online courses are inconclusive. Proponents of online courses suggest that a technology-mediated learning environment such as online courses might achieve the following: a) improve students' achievement (Alavi, 1994; Hiltz, 1995; Maki et al., 2000; Schutte, 1997; Wetzel, et al., 1994), b) improve students' evaluation of the learning experience (Alavi, 1994; Hiltz, 1995), c) be more effective in teaching some types of courses (Frailey, McNell, E., & Mould, 2000), d) help to increase teacher/student interaction (Cradler, 1997; Hiltz, 1995; Schutte, 1997), and e) make learning more student-centered (Cradler, 1997). For example, when undergraduate students learned basic information technology skills, students' performance was no different whether they enrolled in a traditional classroom course or an online course (Piccoli, Ahmad, & Ives, 2001). Using a philosophy course for teachers at both the high school and the two-year college level, Pucel and Stertz (2005) found that student performance between the Web-based course and the traditional classroom course was mixed. To investigate the relationships between knowledge types and the effectiveness of Web-based instruction, Sitzmann et al. (2006) conducted a meta-analysis of 96 research reports from 1991 to 2005 including employee and college training courses. They found that "Web-based instruction was 6% more effective than classroom instruction for teaching declarative knowledge, the two delivery media were equally effective for teaching procedural knowledge, and trainees were equally satisfied with Web-based instruction and classroom instruction." In the same study, they also found that if the same instructional methods were used, Web-based instruction and classroom instruction were equally effective for teaching declarative knowledge. This implies that there were no media effects in teaching Web-based courses, as Clark's theory (1983, 1994) suggested. In other words, delivery media (e.g., the Web) are insignificant in affecting learning outcomes. Instead, he argued that individual differences and instructional meth-
ods have more influence on learning outcomes. In addition, the teacher will continue to play a central role in online education, and the teacher’s role is not simply the repository of knowledge as in a classroom environment, but becomes one of a learning catalyst and knowledge navigator alongside the Internet (Volery and Lord, 2000).
What Influences the Effectiveness of Online Courses?
Many different factors have been identified as influencing the learning outcomes of online courses, such as the student's maturity and motivation (Hiltz, 1993; Lawther and Walker, 2001), the cognitive skills needed to take full advantage of the Web medium (Heller, 1990; Trumbull, Gay and Mazur, 1992), the student's learning strategy (Jonnassen, 1985), the overall emphasis on analytical and planning skills (Dacko, 2001) and epistemological aspects (Jacobson and Spiro, 1995). Leidner and Jarvenpaa (1995) investigated the fit between different models of learning (e.g., objectivist, constructivist) and electronic teaching technology (e.g., Internet, distance learning, instructor console) and suggested that universities need to match technologies to learning models to achieve effective teaching and learning. Eom, Wen, and Ashill (2006) used a sample of 397 undergraduate and graduate students in different colleges to investigate the effectiveness of online courses. They found that online education could be a superior mode of instruction if it is targeted to learners with visual and read/write learning styles and supported by timely, meaningful instructor feedback of various types. In a study analyzing 19 online graduate courses, Rovai and Barnum (2003) concluded that only active interaction was a significant predictor of perceived learning in online courses. This implies that to achieve high perceived learning in the context of distance education, quality assurance is necessary to balance course design, pedagogy, and technology with the needs of learners.
Piccoli, Ahmad, and Ives (2001) suggested a framework for measuring the effectiveness of Web-based courses. Their model includes the variables of performance, self-efficacy, and satisfaction in the effectiveness dimension, and these are influenced by variables in the human dimension and the design dimension. The human dimension comprises student and instructor variables, while the design dimension includes learning model, technology, learner control, content, and interaction. They claimed that their model would allow researchers to analyze Web-based course effectiveness systematically. After surveying 900 students enrolled in 37 class sections, Selim (2007) identified eight critical success factors for students' acceptance of e-learning: the instructor's attitude towards and control of the technology, the instructor's teaching style, student motivation and technical competency, student interactive collaboration, e-learning course content and structure, ease of on-campus Internet access, effectiveness of the information technology infrastructure, and university support of e-learning activities. As discussed, previous studies suggest that the effectiveness of Web-based courses is influenced mostly by factors in the instructor, student, technology, and course content (design) dimensions. Regarding the student dimension, student motivation and learning style have been considered important variables influencing the learning outcome (e.g., Piccoli, Ahmad, & Ives, 2001; Lawther & Walker, 2001; Jonassen, 1985). Student learning style was also found to have a significant impact on the perceived learning outcome of Web-based courses (Eom et al., 2006).
Students' Learning Style
The effectiveness of a course may vary among students because every student has a different learning style, that is, a set of biologically and developmentally imposed personal characteristics
(Dunn, Beaudry, & Klavas, 1989). A learning style makes the same teaching method effective for some students and ineffective for others, because different students are believed to learn differently. It is also believed that students are more satisfied and learn more effectively when their learning style and the teaching style are matched. Analyzing empirical studies of learning styles in the 1980s, Dunn, Beaudry, and Klavas (2002) concluded that students' achievement increases when teaching methods match their learning styles, especially the biological and developmental characteristics that affect how they learn. For further details on learning style surveys, refer to Hickcox (1995). Just as in traditional classroom courses, students' learning styles have a significant impact on online course effectiveness (Barnes, Preziosi, & Gooden, 2004; Dunn, Beaudry, & Klavas, 2002). Many different models of learning styles have been introduced in the literature, including the Kolb learning preference model (Kolb, 1984), Gardner's theory of multiple intelligences (Gardner, 1983), and the Myers-Briggs Type Indicator (MBTI) (Myers & Briggs, 1995). The MBTI is intended to measure personality types, but MBTI profiles are known to have strong learning style implications (Pittenger, 1993). Advocates of learning styles suggest that teachers should identify learning styles and use them as a basis for providing responsive instruction. Because students' learning styles are important for responsive teaching, it is recommended that instructors use a reliable and valid learning style preference instrument to assess them. Some researchers, however, are skeptical about the viability and validity of using a learner's learning style to adapt or personalize a learning environment to the learner's needs (Melis & Monthienvichienchai, 2004; Stellwagen, 2001). They claim that too many other variables affect learning effectiveness and that the student's learning style is not the most influential among them; instead, instructors should act as facilitators of learning who lead students through the subject material.
Student Self-Motivation
Most instructors would say that students' motivation is an important factor in academic success in all learning environments, including Web-based courses. In addition, students need to be more motivated and responsible to manage Web-based courses than traditional face-to-face courses, because Web-based courses typically place students in a physically isolated, self-paced learning environment. Schrum and Hong (2002) suggested that the success factors for online courses include self-discipline and motivation, time commitment, study skills, preference for text-based learning, access to technology, and technology experience. Among these suggested characteristics, self-discipline and motivation were found to be the only predictors of online course success in a psychology course (Waschull, 2005). Self-motivation is defined as the self-generated energy that gives behavior direction toward a particular goal (Zimmerman, 1994). Self-motivation is considered one of the major factors differentiating successful from less successful students. For example, successful students have the ability to motivate themselves even when they do not have a strong desire to complete a task, while less successful students have difficulty executing self-motivation skills (e.g., goal setting, verbal reinforcement, self-reward or self-punishment) (Dembo & Eaton, 2000). Student motivation has been found to have a powerful effect on attrition and completion rates in online education (Frankola, 2001; Galusha, 1997). If e-learners are not internally motivated, external support (e.g., organizational support) will give them a better chance of completing e-learning courses (Frankola, 2001). Students' self-motivation is one important aspect of self-regulated learning, which is a critical success factor for the learning outcomes of Web-based courses. Self-regulated students tend to be metacognitively, motivationally, and behaviorally active participants in their own learning (Zimmerman, 1986). These types of learners search
for advice and information to help them learn, and they self-instruct and self-reinforce their own performance (Rohrkemper, 1989). Metacognitively, self-regulated learners plan, set goals, organize, self-monitor, and self-evaluate (Corno, 1986). Behaviorally, they select, structure, and create environments that optimize learning (Zimmerman & Martinez-Pons, 1986). Self-regulated learners also show motivationally active traits such as high self-efficacy, self-attributions, and intrinsic task interest (Schunk, 1986). Some students are apt to motivate themselves toward their study or work, while other students lack this intrinsic motivation. The reason is that motivation is highly related to personality (e.g., Chamorro-Premuzic & Furnham, 2005; Major, Turner, & Fletcher, 2006). Students' personalities influence many aspects of the learning process, such as their attitude toward a course, how they interact with others, and their motivation.
WHAT IS PERSONALITY?
The term personality has several different definitions (Merriam-Webster Dictionary, 2004), one of which is: "the complex of characteristics that distinguish an individual especially the totality of an individual's behavioral and emotional characteristics." Because personalities differ, people approach problem solving, decision making, learning, and job interaction differently. It is believed that people's behavior can be predicted from their personality (Robins & Coulter, 1999). Personality traits such as aggressive, loyal, and sociable are often used to describe one's personality, and researchers have developed models that use these traits to identify personality. Currently, the two most widely recognized personality models are the Myers-Briggs Type Indicator (MBTI) and the Big-Five model of personality.
Why Personality Traits?
Because learning is a very complex process, predicting academic performance has been a difficult problem. One method of predicting academic performance is to use students' personality measures. O'Connor and Paunonen (2007) provided three reasons for using personality traits as predictors of academic performance. First, personality traits are known to affect certain habits that can influence academic success. Rothstein, Paunonen, Rush, and King (1994, p. 517) argued that, ''to the extent that evaluations of performance in an academic program are influenced by characteristic modes of behavior such as perseverance, conscientiousness, talkativeness, dominance, and so forth, individual differences in specific personality traits justifiably can be hypothesized to be related to scholastic success''. The second reason is that personality traits suggest what an individual will do, whereas cognitive ability reflects what an individual can do (Furnham & Chamorro-Premuzic, 2004). A personality scale may predict long-term academic performance more accurately than a cognitive ability scale (Goff & Ackerman, 1992). Ackerman and Heggestad (1997) claimed that abilities, interests, and personality develop concurrently, such that ability level and personality dispositions determine the probability of success in a particular task domain, while interests determine the motivation to attempt the task. Instead of cognitive ability, Chamorro-Premuzic and Furnham (2005, 2006) propose "intellectual competence" as an individual's capacity to acquire and consolidate knowledge, which depends not only on traditional cognitive abilities but also on self-assessed abilities and personality traits. The third reason is that, in a university environment, personality may predict academic performance better because measures of cognitive ability tend to lose their predictive power at this higher level of education (Ackerman, Bowen, Beier, & Kanfer, 2001; Furnham, Chamorro-Premuzic, & McDougall, 2003). Academic success is not
solely dependent on ability factors such as IQ at any level of study (e.g., Chamorro-Premuzic & Furnham, 2006). The influence of cognitive ability on the academic success of university students is often weaker than expected, compared with that of elementary and secondary school students (O'Connor & Paunonen, 2007). There could be several reasons for this weaker relationship, including the different assessment methods used at the university level, such as attendance, class participation, in-depth domain knowledge, and others. Considering these reasons, using personality traits may be an effective method for predicting academic performance in universities.
Myers-Briggs Type Indicator (MBTI)
The MBTI is widely used to classify personality traits in many organizations. In an MBTI survey, people answer more than 100 questions about how they act or feel in different situations (Amabile, 1996). Based on their answers, their personality is represented as a score in each category: extrovert or introvert (E or I), sensing or intuitive (S or N), feeling or thinking (F or T), and perceptive or judgmental (P or J). A brief description of each category is as follows. The extrovert/introvert dimension reflects social interaction: an extrovert is someone who is outgoing, dominant, and often aggressive, and who wants to change the world. Depending on an individual's preference for gathering data, a person is categorized as sensing or intuitive. Intuitive types like solving new problems rather than doing the same thing repeatedly and tend to jump to conclusions; in addition, they do not want to spend time on precision and are impatient with routine details. When a person makes decisions, he or she relies mainly on feeling or thinking. Thinking types are unemotional and uninterested in people's feelings; they like analysis and putting things into logical order, and they seem to relate well only to other thinking types. Finally, one can be perceptive or judgmental. Judgmental types are
good planners, decisive, purposeful, and exacting. They focus on completing a task, make decisions quickly, and want only the information necessary to get the task done. These preferences can be combined to represent 16 different personality types. For example, based on the book Introduction to Type (Briggs-Myers, 1980, pp. 7-8), the combination INFJ (introvert, intuitive, feeling, judgmental) describes someone who is quietly forceful, conscientious, and concerned for others. Such people succeed through perseverance, originality, and the desire to do whatever is needed or wanted, and they are often highly respected for their uncompromising principles. Another combination, ENTJ (extrovert, intuitive, thinking, judgmental), describes a warm, friendly, candid, and decisive personality, usually skilled in anything that requires reasoning and intelligent talk, but one that may sometimes overestimate what it is capable of doing. More than two million people take the MBTI measure each year in the United States alone. Due to the popularity of the MBTI, there have been several studies applying MBTI measures to the academic domain (for example, Felder & Bent, 2005; Keil, Haney, & Zoffel, 2009; Mupinga, Nora, & Yaw, 2006). However, there is no hard evidence that the MBTI is a valid measure of personality (Robins & Coulter, 2009).
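To make the combination of the four dichotomies concrete, the short sketch below simply assembles a respondent's four preference letters into one of the 16 type codes described above. It is purely illustrative; the dichotomy names, the input format, and the example values are assumptions introduced here, and the sketch has nothing to do with how the MBTI instrument is actually scored.

```python
from typing import Dict

# The four MBTI dichotomies and their two possible preference letters,
# in the order used in four-letter type codes (e.g., INFJ, ENTJ).
DICHOTOMIES = [
    ("attitude", ("E", "I")),      # Extraversion / Introversion
    ("perceiving", ("S", "N")),    # Sensing / Intuition
    ("judging", ("T", "F")),       # Thinking / Feeling
    ("orientation", ("J", "P")),   # Judging / Perceiving
]

def mbti_type(preferences: Dict[str, str]) -> str:
    """Combine one preference letter per dichotomy into a four-letter type code."""
    letters = []
    for name, valid in DICHOTOMIES:
        letter = preferences[name].upper()
        if letter not in valid:
            raise ValueError(f"{name!r} must be one of {valid}, got {letter!r}")
        letters.append(letter)
    return "".join(letters)

# Example: the INFJ combination discussed in the text.
print(mbti_type({"attitude": "I", "perceiving": "N",
                 "judging": "F", "orientation": "J"}))   # -> INFJ
```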
THE BIG-FIVE MODEL
The Big-Five model of personality represents the dominant conceptualization of personality structure in the current literature. Researchers agree that there are five robust personality dimensions that can serve as a meaningful taxonomy for classifying personality attributes (Digman, 1990; Norman, 1963). Those dimensions are Extraversion, Agreeableness, Emotional Stability, Conscientiousness, and Openness to Experience. The labels for each dimension differ slightly among researchers.
Table 1. Personality characteristics

Personality dimension      Characteristic traits
Agreeableness              A tendency to be good-natured, cooperative, tolerant, courteous, helpful, trusting, and forgiving.
Conscientiousness          A tendency to be responsible, hardworking, dependable, persistent, and achievement-oriented.
Emotional Stability        A tendency to be calm, enthusiastic, and self-confident, and to maintain an even temperament.
Openness to Experience     A tendency to be imaginative, cultured, curious, broad-minded, artistically sensitive, and intelligent.
Extraversion               A tendency to be sociable, gregarious, talkative, assertive, adventurous, active, and ambitious.
For example, Digman (1989) labeled these dimensions Extraversion/Introversion (Extraversion), Neuroticism/Emotional Stability or Anxiety (Emotional Stability), Friendly Compliance/Hostile Non-Compliance (Agreeableness), Will to Achieve (Conscientiousness), and Intellect (Openness to Experience). A brief description of each dimension is listed in Table 1. These dimensions of personality are common to all human beings regardless of culture, ethnic background, or nationality. For example, McCrae and Costa (1997) compared samples from different cultures (American, German, Portuguese, Hebrew, Chinese, Korean, and Japanese) and concluded that the Big-Five model represents a common, cross-cultural structure of human personality. The Big-Five model has been widely applied to predict a variety of types of job performance (Amabile, Hadley, & Kramer, 2002; Barrick, Stewart, Neubert, & Mount, 1998; Damanpour, 1991; Egan, 2005; Madjar, Oldham, & Pratt, 2002; Oldham & Cummings, 1996; Sorensen & Stuart, 2000). One of the Big-Five dimensions, Conscientiousness, showed consistent relationships with job performance across different occupational groups. The other personality dimensions predict job performance differently depending on the situation and the occupation. For example, Extraversion was a valid predictor for jobs involving social interaction (e.g., management and sales), and Openness to Experience was considered a valid predictor of training proficiency.
Conscientiousness
Conscientiousness reflects dependability, that is, being careful, thorough, responsible, organized, and planning-oriented (Botwin & Buss, 1989; Noller, Law, & Comrey, 1987). In addition, it is strongly related to being hardworking, achievement-oriented, and persevering (Bernstein, Garbin, & McClellan, 1983; Costa & McCrae, 1988; Digman & Inouye, 1986). Due to these traits, Conscientiousness has shown consistent relations with all job performance criteria for all occupational groups, including academic achievement.
Openness
People with high Openness are imaginative, curious, broad-minded, intelligent, and open to experience and culture. Due to these characteristics, Openness has been found to be a valid predictor of training proficiency (but not of job proficiency), possibly because individuals who score high on this dimension are more likely to have positive attitudes toward learning experiences in general (Barrick & Mount, 1991). Measures of Openness to Experience may provide a good indicator of which individuals are "training ready"; in other words, they may be useful for identifying those who are most willing to engage in learning experiences and who are most likely to benefit from training programs.
Emotional Stability
Emotional Stability is also called Stability, Emotionality, or Neuroticism (Norman, 1963). Common traits associated with the low end of this factor include being anxious, depressed, angry, embarrassed, emotional, worried, and insecure, so freedom from these traits is an important personality characteristic for anyone who wants to live peacefully. However, most of the correlations between Emotional Stability and job performance are relatively low. That is, as long as an individual is reasonably emotionally stable, differences on this dimension have little predictive value for job performance (Barrick & Mount, 1991).
Extraversion
It is widely agreed (Botwin & Buss, 1989; Krug & Johns, 1986; McCrae & Costa, 1985) that Extraversion comprises the traits of being sociable, gregarious, talkative, and active, which are important characteristics for predicting performance in jobs that require interpersonal interaction. In training settings, Extraversion is also a valid predictor of training proficiency across occupations, because being active and sociable may lead individuals to be more involved in training and consequently to learn more (Mount & Barrick, 1998).
Agreeableness
Agreeableness comprises the traits of being courteous, flexible, trusting, good-natured, cooperative, forgiving, and tolerant. It has also been termed Friendly Compliance vs. Hostile Non-Compliance (Digman & Takemoto-Chock, 1981) or Love (Peabody & Goldberg, 1989), and it may be related to friendly cooperation in a work environment. Research suggests that Agreeableness is not an important predictor of job performance (Barrick & Mount, 1991): being courteous, trusting, straightforward, and softhearted appears to have a smaller impact on job performance than being talkative, active, and assertive.
THE BIG-FIVE PERSONALITY TRAITS AND ACADEMIC ACHIEVEMENT
Several studies have investigated the possible relationships between different learning approaches and individual personality traits. For example, Furnham, Christopher, Garwood, and Martin (2007) administered a general knowledge test (Irwing, Cammock, & Lynn, 2001), a learning styles questionnaire (Biggs, 1987), and a measure of the Big-Five personality variables (Costa & McCrae, 1992) to 430 students from four universities and concluded that Openness to Experience is associated with intelligence. Among the five personality traits, Openness to Experience was found to be positively linked with a deep learning style in a study of 852 university students (Chamorro-Premuzic & Furnham, 2009). Other studies support this finding: learners high in Openness to Experience and learners with a deep learning style share an inquisitive, intrinsically motivated, and intellectual profile (Chamorro-Premuzic & Furnham, 2005; Costa & McCrae, 1992). Because Openness is a personality trait that is positively linked to knowledge and skill acquisition, it is significantly related to learning and to individual differences (Furnham, Christopher, Garwood, & Martin, 2007). In addition, Openness is known to influence intellectual curiosity, need for cognition, and cognitive ability (Chamorro-Premuzic & Furnham, 2005). This implies that students with a strong Openness trait can be expected to be intrinsically motivated in their studies and to enjoy their learning experience more, which may result in higher achievement. In addition to personality and learning style differences, learners differ in emotional intelligence (Parker et al., 2004; Petrides, Frederickson, & Furnham, 2004) and grit (Duckworth, Peterson, Matthews, & Kelly, 2007). These differences should be considered in order to understand the dispositional basis of learning approaches.
Grit, which is highly correlated with Big-Five Conscientiousness, is defined as perseverance and passion for long-term goals. Duckworth et al. (2007) concluded that grit accounted for an average of 4% of the variance in success outcomes in academic environments, using samples that included two adult groups (1,545 and 690 subjects), grade point averages of Ivy League undergraduates (138 subjects), retention in two classes of West Point cadets (1,218 and 1,308 subjects), and rankings in the National Spelling Bee (175 subjects). Emotional Intelligence (EI) refers to the ability to understand and manage people and to act wisely in human relations (Thorndike, 1920). As such, EI is held to explain how emotions advance life goals. However, studies of the direct relationship between EI and academic achievement are somewhat inconclusive. For example, Parker et al. (2004) argued that academic success was strongly associated with several dimensions of emotional intelligence, while Petrides, Frederickson, and Furnham (2004) claimed that the correlations between EI and academic achievement were small and not statistically significant. Petrides, Frederickson, and Furnham (2004) did find that EI was associated with higher life satisfaction, better perceived problem-solving and coping ability, and lower anxiety, although these associations were not statistically significant. Such attributes could influence academic achievement indirectly.

Conscientiousness and Academic Achievement
Many previous studies have shown that Conscientiousness is related to post-secondary academic success (Bauer & Liang, 2003; Chamorro-Premuzic & Furnham, 2003a; Chamorro-Premuzic & Furnham, 2003b; Conard, 2006; Furnham et al., 2003; Phillips, Abraham, & Bond, 2003; Wolfe & Johnson, 1995; Lounsbury, Sundstrom, Loveland, & Gibson, 2003). Conscientious students are motivated to perform better than less conscientious students in academic settings (Chamorro-Premuzic & Furnham, 2005). Not only are they more motivated, as described above, but conscientious students are also more likely to be careful, thorough, responsible, organized, and planning-oriented, traits that appear to be important for high achievement in university courses.

Openness to Experience and Academic Achievement
Research on the relationship between Openness and academic performance has produced mixed results. Several studies reported that Openness to Experience can be used to predict academic achievement (Farsides & Woodfield, 2003; Lievens et al., 2002; Phillips et al., 2003; Rothstein et al., 1994). One reason might be that an important factor in academic success, intelligence, is positively correlated with measures of Openness to Experience (Chamorro-Premuzic & Furnham, 2005). Other studies reported no significant correlation between Openness and academic performance (e.g., Conard, 2006; Furnham et al., 2003). Therefore, performance in courses that require high intellectual ability may have a stronger relationship with the Openness to Experience factor.

Extraversion and Academic Achievement
The association of Extraversion with academic performance has also generated mixed research findings. For example, some researchers suggested that Extraversion is negatively associated with academic performance (Bauer & Liang, 2003; Furnham et al., 2003; Furnham & Chamorro-Premuzic, 2004; Maki & Maki, 2003). Extraversion involves traits such as being sociable, gregarious, talkative, and active, which may interfere with students' academic performance in courses that emphasize independent work. Chamorro-Premuzic and Furnham (2005) suggested that extraverts spend more time socializing while introverts spend more
time in studying. There are studies that suggest no significant relationship between Extraversion and academic performance (e.g., Conard, 2006; Wolfe & Johnson, 1995). Even though there have been many research findings suggesting negative relations between Extraversion and academic achievement, this relationship is not yet firmly established as a general rule (O’Connor & Paunonen, 2007). Thus, the relationships between academic achievement and Extraversion need more investigation.
Emotional Stability and Academic Achievement
A few studies have found positive associations between Emotional Stability and academic performance (Chamorro-Premuzic & Furnham, 2003a; Chamorro-Premuzic & Furnham, 2003b). In general, it is believed that emotionally stable students perform better academically than neurotic students; if students are not emotionally stable, they may experience anxiety and stress that could damage their academic performance (Chamorro-Premuzic & Furnham, 2005). However, a review of empirical studies concluded that Emotional Stability is mostly unassociated with the academic performance of college students (O'Connor & Paunonen, 2007).
Agreeableness and Academic Achievement
Agreeable students do not necessarily perform better in university courses. The traits of Agreeableness (being good-natured, flexible, trusting, cooperative, forgiving, and tolerant) do not appear to be very influential on academic performance in college courses. A review of the previous literature suggests that Agreeableness is not an important determinant of academic performance in higher education (O'Connor & Paunonen, 2007). Some studies show a positive association with academic performance (Farsides
& Woodfield, 2003; Conard, 2006), while other studies show a negative relationship (Rothstein et al., 1994).
Big-Five Personality Measuring Inventories and Validity/Reliability Issues
Several personality measuring inventories have been developed to measure the Big-Five personality factors. Some representative examples are the Big-Five Inventory (BFI) (Benet-Martinez & John, 1998; John, Donahue, & Kentle, 1991), the International Personality Item Pool (IPIP) (Pennsylvania State University, n.d.), the NEO Five-Factor Inventory (NEO-FFI) (Costa & McCrae, 1999), the Personal Style Inventory (PSI) (Lounsbury & Gibson, 1998), the 5 PFT (Elshout & Akkerman, 1975), and the Revised NEO Personality Inventory (NEO-PI-R) (Costa & McCrae, 1992). Among these different measures, the ones most commonly applied in academic research are the Revised NEO Personality Inventory and the NEO Five-Factor Inventory (O'Connor & Paunonen, 2007). Some of these inventories are commercially available and have proven validity and reliability. For example, Kim and Schniederjans (2004) used a questionnaire entitled the Wonderlic® Personality Characteristics Inventory™ (PCI) survey assessment instrument (Barrick & Mount, 1991; Mount et al., 1999; Wonderlic, 2004). This questionnaire has been recognized by psychology researchers for over 50 years as a reliable means of measuring personality characteristics. Wonderlic, Inc. reported very high overall instrument reliability, with alpha coefficients of 0.86 for Extraversion, 0.82 for Agreeableness, 0.87 for Conscientiousness, 0.86 for Emotional Stability, and 0.83 for Openness to Experience (Mount et al., 1999). Test-retest reliability was measured using various groups of business and education participants, with an average reliability coefficient no lower than 0.77 for any group.
Construct validity and criterion-related validity have been established through numerous research studies (Mount & Barrick, 1998; Wonderlic, 2004, www.wonderlic.com).
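As a concrete illustration of what those alpha coefficients describe, the sketch below computes Cronbach's alpha for a block of items assumed to measure one Big-Five dimension. The item responses are invented and the function is a generic textbook implementation, not the Wonderlic scoring procedure.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of item scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # sample variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses of six students to four items of one personality scale
# (e.g., Conscientiousness items scored 1-5); all values are made up.
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 4, 3, 4],
    [1, 2, 2, 1],
])
print(round(cronbach_alpha(scores), 2))  # internal-consistency estimate for this scale
```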
EXAMPLE STUDY OF APPLYING THE BIG-FIVE MODEL TO A WEB-BASED COURSE
As described above, many studies have applied the Big-Five personality model to investigate relationships between students' performance and personality traits, most of them in the traditional classroom environment. To investigate the relationship between students' personality traits and their academic performance in a Web-based course in a business discipline, two empirical studies have been conducted (Kim & Schniederjans, 2004; Schniederjans & Kim, 2005). Schniederjans and Kim (2005) studied over 500 students enrolled in two online sections of an undergraduate management information systems (MIS) course. These sections were offered as a self-paced course using the Blackboard educational software system. The required materials were a textbook, its supplementary materials (e.g., CD-ROM), and instructor-prepared, Web-available MS PowerPoint slides for each chapter. Students needed to take four exams to complete the course. The total points from the four exams represented a student's performance and were used as the actual achievement score (the dependent variable) in the study. The authors examined whether each of the Big-Five factors was significantly correlated with students' achievement scores, using a questionnaire entitled the Wonderlic® Personality Characteristics Inventory™ (PCI) survey assessment instrument. In addition, a multiple regression model was tested to determine whether course performance could be predicted from the PCI scores. The PCI survey instrument consists of 150 questions covering the Big-Five scales. All of the scales are based on an index
score ranging from 0 to 100 percent, where 100 percent represents a high degree of the particular personality characteristic. The PCI instrument's scales are designed to be used individually or together in models for predicting an individual's potential to handle particular jobs or training. As in many previous studies, four personality characteristics (Conscientiousness, Openness to Experience, Emotional Stability, and Agreeableness) were significantly correlated with student achievement scores in the Web-based undergraduate MIS course. As expected, Extraversion showed no significant correlation with student achievement scores. It was also found that a student's performance in the totally Web-based course could be predicted if the student's PCI scores were available. Using the PCI scores for the four independent variables from a sample of 50 students, the predicted achievement scores fell within the range of the actual student achievement scores. For example, the mean of the predicted achievement scores was 81.263 (std. dev. 16.454) and the mean of the actual achievement scores was 81.320 (std. dev. 16.450). The correlation between the two sets of values was nearly perfect at 0.999 (p = 0.000). Comparing the two means yielded a t-test value of only 0.650 (df = 49), supporting the assertion that there is no statistically significant (p = 0.518) difference, or inaccuracy, between the actual and predicted student achievement scores. It was concluded that the multiple regression model accurately predicted the group's student achievement scores.
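A minimal sketch of this kind of analysis is shown below, with invented data: it fits a multiple regression of achievement scores on four personality scores for one group of students, predicts scores for a separate group of 50, and then compares predicted and actual scores with a correlation and a paired t-test, mirroring the validation steps reported above. The variable names, weights, and data are assumptions for illustration only, not the Schniederjans and Kim (2005) data or procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate(n):
    """Invented PCI-style scores (0-100) on four dimensions and achievement values."""
    X = rng.uniform(0, 100, size=(n, 4))   # Conscientiousness, Openness, Emotional Stability, Agreeableness
    y = 20 + X @ np.array([0.4, 0.2, 0.15, 0.1]) + rng.normal(0, 5, n)
    return X, y

X_fit, y_fit = simulate(200)   # students used to estimate the regression
X_new, y_new = simulate(50)    # separate group of 50 students to predict

# Ordinary least squares with an intercept term.
add_intercept = lambda X: np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(add_intercept(X_fit), y_fit, rcond=None)
predicted = add_intercept(X_new) @ coefs

# Validation mirroring the reported checks: correlation between predicted and
# actual scores, and a paired t-test comparing their means.
r, r_p = stats.pearsonr(predicted, y_new)
t, t_p = stats.ttest_rel(predicted, y_new)
print(f"r = {r:.3f} (p = {r_p:.3g}); paired t = {t:.3f} (p = {t_p:.3g})")
```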
Implications of the Example Study
Similar to previous findings, Schniederjans and Kim (2005) found that Conscientiousness, Openness to Experience, Emotional Stability, and Agreeableness are significant predictors of academic performance in a Web-based MIS course, while Extraversion is not an important determinant. This
can be interpreted as follows: to be successful in a Web-based course, which requires mostly individual work and self-paced learning, a student needs high scores on the personality traits of Conscientiousness (hard work, motivation), Openness (to a new method of learning), Agreeableness (tolerance when a computer does not work as expected), and Emotional Stability (when encountering unexpected situations, because help is often not immediately available). For students who score high on Extraversion, Web-based courses are not a recommended alternative, because such students may prefer a socially interactive learning environment (a face-to-face setting such as a traditional classroom). Instead of allowing students simply to enroll in a Web-based course for convenience, college advisors need to explain the possible consequences to extraverted students so that they do not waste their time and money. Students with high Conscientiousness and Openness scores (being careful, motivated, responsible, organized, curious, and intelligent) may learn effectively in Web-based courses even in their freshman year. When advising a student, it is also recommended to use the regression model to explain what grade the student is likely to get, as Schniederjans and Kim (2005) demonstrated. Through this type of arrangement, students may be more satisfied and, as a result, universities may improve retention rates. As suggested by Kim and Schniederjans (2004), PCI screening of learners can also be applied effectively in employee training (adult education/training) environments. In 2009, U.S. companies' spending on employee training (including payroll and spending on external products and services) was $52.2 billion, or $1,036 per learner (Training Magazine, 2009). With this much spending, companies want to maximize training effectiveness by matching employees' preferences with the training method (online or traditional classroom). Since full-time employees generally prefer online courses because of time constraints, personality
screening may be a very important step in obtaining effective training outcomes.
Cautionary Notes
Personality traits can provide instructors with many useful insights into students' behavior. However, it has been claimed that learning is driven much more by context than by personality traits (Watkins, 1998). The contextual orientation of learning means that, to achieve more effective learning outcomes, instructors should place first priority on students' previous knowledge of the subject matter rather than on their personality traits (Lalley & Gentile, 2009). For example, comparing the Big-Five personality traits of students in a traditional setting and in a Web-based course, Caspi et al. (2006) concluded that social participation in the learning environment is primarily a result of the educational context, while individual differences play a secondary role. This implies that students tend to be discouraged from participating if they do not know the subject well. If the level of previous knowledge is equal among students, then personality traits play an important role in learning effectiveness. Therefore, when investigating the relationships between personality traits and academic achievement, it is necessary to equalize (or control for) students' background knowledge. To ensure that students have sufficient prior knowledge, instructors (or investigators) may use formative assessment (Sadler, 1998), which can be any method of assessing the current status of students' knowledge of a subject, such as quizzes, discussions, games, a one-page paper, and others. Another concern is that personality might not be the biggest determinant of students' academic achievement, as is sometimes claimed. In a recent study, Chamorro-Premuzic and Furnham (2009) found that the overlap between learning approaches and personality traits is lower than previously suggested. Emotional intelligence was found to be the most significant direct predictor of GPA among community college students enrolled in online
courses (Berenson, Boyles, & Weaver, 2008). In another study, none of the Big-Five personality characteristics predicted performance or satisfaction differentially for the two formats (Web-based and traditional lecture) of a general psychology course (Maki & Maki, 2003).
FUTURE RESEARCH DIRECTIONS
The regression model developed by Schniederjans and Kim (2005) to predict students' performance from their PCI scores is a useful tool for college advisors and administrators. Universities can develop their own template models for screening students who should be recommended to register for Web-based courses. In addition, universities can use students' PCI scores to plan course offerings. For example, if many students have high scores on Conscientiousness and Openness, universities should offer more Web-based courses for convenience and other benefits. In other words, universities can use student personality profiles to estimate the likely number of registrations in traditional classroom courses and Web-based courses. To do that, the predictive ability of the regression model should be validated by additional studies using several courses in different disciplines. Many previous studies examining the relationships between personality and academic performance were limited either in domain (one subject area or topic) or by using a single indicator of academic performance (e.g., GPA or course grade; a few studies used satisfaction) (O'Connor & Paunonen, 2007; Trapmann et al., 2007). For example, a meta-analysis conducted by Trapmann et al. (2007) reported that 68% of previous studies used GPA as the academic performance indicator (followed by satisfaction, at 15%). In the same study, more than half of the studies were conducted either on multiple majors (29%) or on psychology majors (27%). It is true that GPA is easy to use and is the result of many different types of courses and grading systems, so it may reflect a student's overall
college academic achievement. However, GPA (or a course grade) may also be determined by non-learning performance factors such as attendance, team projects, class participation, and others (O'Connor & Paunonen, 2007). If GPA (or a grade) is used as the academic performance indicator, it may not reveal the direct relationships between learning outcomes and a student's personality. Therefore, it is recommended that investigations of the direct relationships between students' personality and different academic performance indicators, such as exam grades, project performance, and class participation points, include all of these indicators simultaneously in one research study. In this manner, it becomes clear which personality traits affect which aspects of academic performance. It is also recommended that structural equation modeling (SEM) be used to investigate the effect of personality on students' learning outcomes in the context of university Web-based courses, in order to establish valid relationships among the variables. Many previous studies investigating the relationships between personality and academic performance did not utilize SEM, which may reveal the role of the Big-Five personality traits in the learning outcomes of Web-based courses when other determinants (e.g., learning styles, interactions) are present. Using many Web-based courses in different subject areas could also reduce the effect of discipline-specific biases in online education (see, for example, Eom et al., 2006). For example, in discussion-oriented psychology courses personality plays a secondary role (Caspi et al., 2006), while personality traits largely determine course performance in Web-based MIS courses (Schniederjans & Kim, 2005). Another future research project could employ narrow Big-Five personality traits to predict students' learning outcomes in Web-based courses. There are fewer studies using the narrow facets of the Big-Five factors than studies using the broad Big-Five factors. Each Big-Five personality trait (broad Big-Five factor) has narrow personality traits (facets) at a lower level of the personality hierarchy.
Table 2. The NEO-PI-R narrow personality traits (facets)

Conscientiousness          achievement-striving, competence, deliberation, dutifulness, order, self-discipline
Agreeableness              trust, straightforwardness, altruism, compliance, modesty, tender-mindedness
Neuroticism                anxiety, hostility, depression, self-consciousness, impulsiveness, vulnerability to stress
Extraversion               warmth, gregariousness, assertiveness, activity, excitement seeking, positive emotion
Openness to experience     fantasy, aesthetics, feelings, actions, ideas, values
One of the personality inventories that measure these narrow traits is the Revised NEO Personality Inventory (NEO-PI-R) (Costa & McCrae, 1992). It assesses six narrow personality traits (facets) for each of the broad Big-Five factors, as shown in Table 2. Narrow personality traits may have more direct relationships with academic performance. For example, even though positive associations have been found between all six facets of Conscientiousness and academic success, the strength of these positive relations varies widely across facets (De Fruyt & Mervielde, 1996). Among the Conscientiousness facets, achievement-striving and self-discipline have been found to be strong predictors of academic performance (Chamorro-Premuzic & Furnham, 2003a; De Fruyt & Mervielde, 1996). To date, few of these studies have examined students' performance in Web-based courses. Investigating the relationships between the narrow facets of the Big-Five personality traits and academic performance in Web-based courses would be a valuable study.
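As a small illustration of how the broad-factor/facet hierarchy in Table 2 might be represented when preparing such a study, the sketch below maps each broad factor to its facets (transcribed from Table 2) and averages a student's facet scores into broad-factor scores. The scores are invented, and simple averaging is only one possible aggregation rule; a validated inventory such as the NEO-PI-R has its own scoring keys.

```python
from statistics import mean

# Broad Big-Five factors mapped to their NEO-PI-R facets (from Table 2).
FACETS = {
    "Conscientiousness": ["achievement-striving", "competence", "deliberation",
                          "dutifulness", "order", "self-discipline"],
    "Agreeableness": ["trust", "straightforwardness", "altruism",
                      "compliance", "modesty", "tender-mindedness"],
    "Neuroticism": ["anxiety", "hostility", "depression",
                    "self-consciousness", "impulsiveness", "vulnerability to stress"],
    "Extraversion": ["warmth", "gregariousness", "assertiveness",
                     "activity", "excitement seeking", "positive emotion"],
    "Openness to experience": ["fantasy", "aesthetics", "feelings",
                               "actions", "ideas", "values"],
}

def broad_scores(facet_scores):
    """Average a student's facet scores into one score per broad factor.

    Facets missing from facet_scores are skipped; this plain average stands in
    for whatever scoring key the chosen inventory actually prescribes.
    """
    result = {}
    for factor, facets in FACETS.items():
        available = [facet_scores[f] for f in facets if f in facet_scores]
        if available:
            result[factor] = mean(available)
    return result

# Hypothetical facet scores for one student (0-100 scale, invented values).
student = {"achievement-striving": 82, "self-discipline": 74, "order": 65,
           "anxiety": 40, "warmth": 70, "ideas": 88}
print(broad_scores(student))
```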
CONCLUSION
It is often noted that learning is a very complex process. Even though there have been many studies attempting to measure learning effectiveness, no single, universally agreed-upon model exists yet. The recent popularity of online education makes accurate assessment of learning effectiveness more important than before, because online courses take place mainly in a self-paced learning
environment that is greatly influenced by students' individual differences. One good tool for measuring individual differences among students is the Big-Five personality model, a proven model of the dimensions of human personality. Using the Big-Five personality traits, it is possible to explain the impact of individual differences on the learning outcomes of Web-based courses as well as traditional classroom courses. In the online education environment, the suggested regression model (Schniederjans & Kim, 2005) can play a vital role in determining whether students are suited to Web-based courses. This regression model, together with individuals' personality traits, can be applied to university courses as well as to industry training environments to maximize the benefits of online education while saving money and time. Nevertheless, to make online education more effective, the Big-Five personality traits of students should be used with caution. Instructors and learners should not consider personality traits a panacea for predicting learning outcomes; other factors also influence academic performance in online education.
REFERENCES
Ackerman, P. L., Bowen, K. R., Beier, M. E., & Kanfer, R. (2001). Determinants of individual differences and gender differences in knowledge. Journal of Educational Psychology, 93, 797–825. doi:10.1037/0022-0663.93.4.797
Ackerman, P. L., & Heggestad, E. D. (1997). Intelligence, personality, and interests: Evidence for overlapping traits. Psychological Bulletin, 121, 219–245. doi:10.1037/0033-2909.121.2.219
Berenson, R., Boyles, G., & Weaver, A. (2008). Emotional intelligence as a predictor for success in online learning. International Review of Research in Open and Distance Learning, 9(2), 1–16.
Alavi, M. (1994). Computer-mediated collaborative learning: An empirical evaluation. Management Information Systems Quarterly, 18(2), 159–174. doi:10.2307/249763
Bernstein, I. H., Garbin, C. P., & McClellan, P. G. (1983). A confirmatory factoring of the California Psychological Inventory. Educational and Psychological Measurement, 43, 687–691. doi:10.1177/001316448304300302
Allen, M., Bourhis, J., Burrell, N., & Mabry, E. (2002). Comparing student satisfaction with distance education to traditional classrooms in higher education: A meta-analysis. American Journal of Distance Education, 16(2), 83–97. doi:10.1207/ S15389286AJDE1602_3
Botwin, M. D., & Buss, D. M. (1989). Structure of act-report data: Is the five-factor model of personality recaptured? Journal of Personality and Social Psychology, 56, 988–1001. doi:10.1037/00223514.56.6.988
Amabile, T. M. (1996). Creativity in context. Boulder, CO: Westview Press. Amabile, T. M., Hadley, C. N., & Kramer, S. J. (2002). Creativity under the gun. Harvard Business Review, 52–61. Barnes, F. B., Preziosi, R. C., & Gooden, D. J. (2004). An examination of the learning styles of online MBA students and their preferred course delivery methods. New Horizons in Adult Education, 18(2), 16–30. Barrick, M. R., & Mount, M. K. (1991). The Big-Five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44(1), 1–26. doi:10.1111/j.1744-6570.1991.tb00688.x Bauer, K. W., & Liang, Q. (2003). The effect of personality and precollege characteristics on first-year activities and academic performance. Journal of College Student Development, 44, 277–290. doi:10.1353/csd.2003.0023 Benet-Martinez, V., & John, O. P. (1998). Los Cinco Grandes across cultures and ethnic groups: Multitrait-multimethod analyses of the Big-Five in Spanish and English. Journal of Personality and Social Psychology, 75, 729–750. doi:10.1037/0022-3514.75.3.729
Briggs-Myers, I. (1980). Introduction to type (3rd ed.). Palo Alto, CA: Consulting Psychologists Press. Cardler, J. (1997). Summary of current research and evaluation of findings on technology in education. Working Paper, Educational Support Systems, San Mateo, CA. Caspi, A., Chajut, E., Saporta, K., & Marom, R. B. (2006). The influence of personality on social participation in learning environments. Learning and Individual Differences, 16, 129–144. doi:10.1016/j.lindif.2005.07.003 Chamorro-Premuzic, T., & Funham, A. (2009). Mainly openness: The relationship between the Big-Five personality traits and learning approaches. Learning and Individual Differences, 19, 524–529. doi:10.1016/j.lindif.2009.06.004 Chamorro-Premuzic, T., & Furnham, A. (2003a). Personality traits and academic examination performance. European Journal of Personality, 17, 237–250. doi:10.1002/per.473 Chamorro-Premuzic, T., & Furnham, A. (2003b). Personality predicts academic performance: Evidence from two longitudinal university samples. Journal of Research in Personality, 37, 319–338. doi:10.1016/S0092-6566(02)00578-0
Chamorro-Premuzic, T., & Furnham, A. (2005). Personality and intellectual competence. Mahwah, NJ: Lawrence Erlbaum Associates. Chamorro-Premuzic, T., & Furnham, A. (2006). Intellectual competence and the intelligent personality: A third way in differential psychology. Review of General Psychology, 10(3), 251–267. doi:10.1037/1089-2680.10.3.251 Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53, 445–460. Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42, 21–29. doi:10.1007/ BF02299088 Conard, M. A. (2006). Aptitude is not enough: How personality and behavior predict academic performance. Journal of Research in Personality, 40, 339–346. doi:10.1016/j.jrp.2004.10.003 Corno, L. (1986). The metacognitive control components of self-regulated learning. Contemporary Educational Psychology, 11, 333–346. doi:10.1016/0361-476X(86)90029-9 Costa, P. T. Jr, & McCrae, R. R. (1988). From catalog to classification: Murray’s needs and the five-factor model. Journal of Personality and Social Psychology, 55, 258–265. doi:10.1037/00223514.55.2.258 Costa, P. T. Jr, & McCrae, R. R. (1992). Revised NEO Personality Inventory (NEO-PI-R) and NEO Five-Factor Inventory (NEO-FFI) professional manual. Odessa, FL: Psychological Assessment Resources. Damanpour, F. (1991). Organizational innovation: A meta-analysis of effects of determinants and moderators. Academy of Management Journal, (September): 555–590. doi:10.2307/256406
De Fruyt, F., & Mervielde, I. (1996). Personality and interests as predictors of educational streaming and achievement. European Journal of Personality, 10, 405–425. doi:10.1002/ (SICI)1099-0984(199612)10:5<405::AIDPER255>3.0.CO;2-M Dembo, M., & Eaton, M. (2000). Self regulation of academic learning in middle-level schools. The Elementary School Journal, 100(5), 473–490. doi:10.1086/499651 Digman, J. M. (1990). Personality structure: Emergence of the five-factor model. Annual Review of Psychology, 41, 417–440. doi:10.1146/annurev. ps.41.020190.002221 Digman, J. M., & Inouye, J. (1986). Further specification of the five robust factors of personality. Journal of Personality and Social Psychology, 50, 116–123. doi:10.1037/0022-3514.50.1.116 Digman, J. M., & Takemoto-Chock, N. K. (1981). Factors in the natural language of personality: Re-analysis, comparison, and interpretation of six major studies. Multivariate Behavioral Research, 16, 149–170. doi:10.1207/s15327906mbr1602_2 Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and passion for long-term goals. Journal of Personality and Social Psychology, 92, 1087–1101. doi:10.1037/0022-3514.92.6.1087 Dunn, R., Beaudry, J., & Klavas, A. (1989). Survey research on learning styles. Educational Leadership, 46, 50–58. Dunn, R., Beaudry, J., & Klavas, A. (2002). Survey of research on learning styles. California Journal of Science Education, 2(2), 75–79. Egan, T. M. (2005). Factors influencing individual creativity in the workplace: An examination of quantitative empirical research. Advances in Developing Human Resources, 160–181. doi:10.1177/1523422305274527
Elshout, J. J., & Akkerman, A. E. (1975). Vijf Persoonlijkheids-faktoren test 5 PFT. Nijmegen: Berkhout BV.
Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York, NY: Basic Books.
Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students’ perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4(2). doi:10.1111/j.1540-4609.2006.00114.x
Gardner, W. L., & Martinko, M. J. (1996). Using the Myers-Briggs type indicator to study managers: A literature review and research agenda. Journal of Management, 22(1), 45–83. doi:10.1177/014920639602200103
Farsides, T., & Woodfield, R. (2003). Individual differences and undergraduate academic success: The roles of personality, intelligence, and application. Personality and Individual Differences, 34, 1225–1243. doi:10.1016/S0191-8869(02)00111-3 Felder, R. M., & Bent, R. (2005). Understanding student differences. Journal of Engineering Education, 94(1), 57–72. Frailey, D., McNell, E., & Mould, D. (2000). Forum: Debating distance learning. Communications of the ACM, 43(2), 11–15. Furnham, A., & Chamorro-Premuzic, T. (2004). Personality and intelligence as predictors of statistics examination grades. Personality and Individual Differences, 37, 943–955. doi:10.1016/j. paid.2003.10.016 Furnham, A., Chamorro-Premuzic, T., & McDougall, F. (2003). Personality, cognitive ability, and beliefs about intelligence as predictors of academic performance. Learning and Individual Differences, 14, 49–66. Furnham, A., Christopher, A., Garwood, J., & Martin, G. (2007). Approaches to learning and the acquisition of general knowledge. Personality and Individual Differences, 43, 1563–1571. doi:10.1016/j.paid.2007.04.013 Galusha, J. M. (1997). Barriers to learning in distance education. Interpersonal Computing and Technology: An Electronic Journal for the 21st Century, 5(3/4), 6-14.
Goff, M., & Ackerman, P. L. (1992). Personality–intelligence relations: Assessment of typical intellectual engagement. Journal of Educational Psychology, 84, 537–552. doi:10.1037/00220663.84.4.537 Hickcox, L. K. (1995). Learning styles: A survey of adult learning style inventory models. In Sims, R., & Sims, S. (Eds.), The importance of learning styles: Understanding the implications for learning, course design, and education (pp. 25–48). Westport, CT: Greenwood Press. Hiltz, S. R. (1995). Teaching in a virtual classroom. International Journal of Educational Telecommunications, 1(2), 185–198. John, O. P., Donahue, E. M., & Kentle, R. L. (1991). The ‘‘Big-Five’’ inventory – versions 4a and 54 (Technical Report). Berkley, CA: University of California, Institute of Personality Assessment and Research. Keil, C., Haney, J., & Zoffel, J. (2009). Improvements in student achievement and science process skills using environmental health science problembased learning curricula. Electronic Journal of Science Education, 13(1), 3–20. Khan, B. (1997). Web-based instruction. Englewood Cliffs, NJ: Educational Technology Publications. Kim, E. B., & Schniederjans, M. (2004). The role of personality in Web-based distance education course. Communications of the ACM, 47(3), 95–98. doi:10.1145/971617.971622
Kiser, K. (1999). 10 things we know so far about online training. Training (New York, N.Y.), 36(11), 66–74. Kolb, D. A. (1984). Experiential learning: Experiences as the source of learning and development. Englewood Cliffs, NJ: Prentice Hall. Krug, S. E., & Johns, E. F. (1986). A large scale cross-validation of second-order personality structure defined by the 16PF. Psychological Reports, 59, 683–693. Lalley, J. P., & Gentile, J. R. (2009). Adapting instruction to individuals: Based on the evidence, what should it mean? International Journal of Teaching and Learning in Higher Education, 20(3), 462–475. Leidner, D. E., & Jarvenpaa, S. L. (1995). The use of information technology to enhance management school education: A theoretical view. Management Information Systems Quarterly, 19(3), 265–291. doi:10.2307/249596 Levy, Y. (2007). Comparing dropouts and persistence in e-learning courses. Computers & Education, 48, 185–204. doi:10.1016/j.compedu.2004.12.004 Lievens, F., Coetsier, P., De Fruyt, F., & De Maeseneer, J. (2002). Medical students’ personality characteristics and academic performance: A five-factor model perspective. Medical Education, 36, 1050–1056. doi:10.1046/j.13652923.2002.01328.x Lounsbury, J. W., & Gibson, L. W. (1998). Personal style inventory: A work-based personality measurement system. Knoxville, TN: Resource Associates. Lounsbury, J. W., Sundstrom, E., Loveland, J. M., & Gibson, L. W. (2003). Intelligence, ‘‘Big-Five’’ personality traits, and work drive as predictors of course grade. Personality and Individual Differences, 35, 1231–1239. doi:10.1016/S01918869(02)00330-6
Madjar, N., Oldham, G. R., & Pratt, M. G. (2002). There’s no place like home? The contributions of work and nonwork creativity support to employees’ creative performance. Academy of Management Journal, (August): 757–767. doi:10.2307/3069309 Major, D. A., Turner, J. E., & Fletcher, T. D. (2006). Linking proactive personality and the Big-Five to motivation to learn and development activity. The Journal of Applied Psychology, 91(4), 927–935. doi:10.1037/0021-9010.91.4.927 Maki, R. H., & Maki, W. S. (2003). Prediction of learning and satisfaction in Web-based and lecture courses. Journal of Educational Computing Research, 28(3), 197–219. doi:10.2190/DXJU7HGJ-1RVP-Q5F2 Maki, R. H., Maki, W. S., Patterson, M., & Whittmaker, P. D. (2000). Evaluation of a Web-based introductory psychology course: Learning and satisfaction in online versus lecture courses. Behavior Research Methods, Instruments, & Computers, 32(2), 230–239. doi:10.3758/BF03207788 McCrae, R. R., & Costa, P. T. Jr. (1985). Updating Norman’s adequate taxonomy: Intelligence and personality dimensions in natural language and in questionnaires. Journal of Personality and Social Psychology, 49, 710–721. doi:10.1037/00223514.49.3.710 McCrae, R. R., & Costa, P. T. Jr. (1997). Personality trait structure as a human universal. The American Psychologist, 52, 509–516. doi:10.1037/0003066X.52.5.509 Melis, E., & Monthienvichienchai, R. (2004). They call it learning style but it’s so much more. World Conference on Elearning in Corporate, Government, Healthcare, and Higher Education (eLearn2004). Mount, M. K., & Barrick, M. R. (1998). Five reasons why the `Big-Five’ article has been frequently cited. Personnel Psychology, 51(4), 849–857. doi:10.1111/j.1744-6570.1998.tb00743.x
Mount, M. K., Barrick, M. R., Laffitte, L. J., & Callans, M. C. (1999). Administrator’s guide for the personal characteristics inventory. Technical Manual. Libertyville, IL: Wonderlic, Inc. Mupinga, D. M., Nora, R. T., & Yaw, D. C. (2006). The learning styles, expectations, and needs of online students. College Teaching, 54(1), 185–189. doi:10.3200/CTCH.54.1.185-189 Noller, P., Law, H., & Comrey, A. L. (1987). Cattell, Comrey, and Eysenek personality factors compared: More evidence for the five robust factors? Journal of Personality and Social Psychology, 53, 775–782. doi:10.1037/0022-3514.53.4.775 Norman, W. T. (1963). Toward an adequate taxonomy of personality attributes: Replicated factor structure in peer nomination personality ratings. Journal of Abnormal and Social Psychology, 66, 574–583. doi:10.1037/h0040291 O’Connor, M. C., & Paunonen, S. V. (2007). Big-Five personality predictors of post-secondary academic performance. Personality and Individual Differences, 43, 971–990. doi:10.1016/j. paid.2007.03.017 Oldham, G. R., & Cummings, A. (1996). Employee creativity: Personal and contextual factors at work. Academy of Management Journal, (June): 607–634. doi:10.2307/256657 Parker, J. D., Creque, R. E., Barnhart, D. L., Harris, J. I., Majeski, S. A., & Wood, L. M. (2004). Academic achievement in high school: Does emotional intelligence matter? Personality and Individual Differences, 37(7), 1321–1330. doi:10.1016/j.paid.2004.01.002 Peabody, D., & Goldberg, L. R. (1989). Some determinants of factor structures from personality trait descriptors. Journal of Personality and Social Psychology, 57, 552–567. doi:10.1037/00223514.57.3.552
Pennsylvania State University. (n.d.). The IPIP-NEO (International Personality Item Pool Representation of the NEO PI-R™). Retrieved November 26, 2007 from http://www.personal. psu.edu/~j5j/IPIP/ Petrides, K. V., Frederickson, N., & Furnham, A. (2004). The role of trait emotional intelligence in academic performance and deviant behavior at school. Personality and Individual Differences, 36, 277–293. doi:10.1016/S0191-8869(03)00084-9 Phillips, P., Abraham, C., & Bond, R. (2003). Personality, cognition, and university students’ examination performance. European Journal of Personality, 17, 435–448. doi:10.1002/per.488 Phoha, V. V. (1999). Can a course be taught entirely via email? Communications of the ACM, 42(9), 29–30. doi:10.1145/315762.315768 Piccoli, G., Ahmad, R., & Ives, B. (2001). Web based virtual learning environment: A research framework and a preliminary assessment of effectiveness in basic IT skills training. Management Information Systems Quarterly, 25(4), 401–426. doi:10.2307/3250989 Pittenger, D. J. (1993). The utility of the MyersBriggs type indicator. Review of Educational Research, 63, 467–488. Pucel, D. J., & Stertz, T. F. (2005). Effectiveness of and student satisfaction with Web-based compared to traditional in-service teacher education courses. Journal of Industrial Teacher Education, 42(1), 7–23. Robins, S., & Coulter, M. (2009). Management (10th ed.). Upper Saddle River, NJ: Prentice Hall. Rohrkemper, M. (1989). Self-regulated learning and academic achievement: A Vygotskian view. In Zimmerman, B. J., & Schunk, D. H. (Eds.), Selfregulated learning and academic achievement: Theory, research, and practice (pp. 143–167). New York, NY: Springer.
Rothstein, M. G., Paunonen, S. V., Rush, J. C., & King, G. A. (1994). Personality and cognitive ability predictors of performance in graduate business school. Journal of Educational Psychology, 86, 516–530. doi:10.1037/0022-0663.86.4.516 Rovai, A. P., & Barnum, K. T. (2003). Online course effectiveness: An analysis of student interactions and perceptions of learning. Journal of Distance Education, 18(1), 57–73. Sadler, D. R. (1998). Formative assessment: Revisiting the territory. Assessment in Education, 5(1), 77–84. doi:10.1080/0969595980050104 Schniederjans, M., & Kim, E. B. (2005). Relationship of student undergraduate achievement and personality characteristics in a total Web-based environment: An empirical study. Decision Sciences Journal of Innovative Education, 3(2), 205–221. doi:10.1111/j.1540-4609.2005.00067.x Schrum, L., & Hong, S. (2002). Dimensions and strategies for online success: Voices from experienced educators. Journal of Asynchronous Learning Networks, 6(1). Schunk, D. H. (1986). Verbalization and children’s self-regulated learning. Contemporary Educational Psychology, 11, 347–369. doi:10.1016/0361-476X(86)90030-5 Selim, H. M. (2007). Critical success factors for e-learning acceptance: Confirmatory factor models. Computers & Education, 49, 396–413. doi:10.1016/j.compedu.2005.09.004 Sorensen, J. B., & Stuart, T. E. (2000). Aging, obsolescence, and organizational innovation. Administrative Science Quarterly, (March): 81–112. doi:10.2307/2666980
Stellwagen, J. B. (2001). A challenge to the learning style advocates. Clearing House (Menasha, Wis.), 74(5), 265–268. doi:10.1080/00098650109599205 Tello, S. F. (2007). An analysis of student persistence in online education. International Journal of Information and Communication Technology Education, 3(3), 7–2. doi:10.4018/jicte.2007070105 Thorndike, E. L. (1920). Intelligence and its uses. Harper’s Magazine, 140, 227–235. Training Magazine. (2009). The 2009 training industry report - executive summary. Retrieved from http//ww.training.com Trapmann, S., Hell, B., Hirn, J. W., & Schuler, H. (2007). Meta-analysis of the relationship between the Big-Five and academic success at university. The Journal of Psychology, 215(2), 132–151. Volery, T., & Lord, D. (2000). Critical success factors in online education. International Journal of Educational Management, 14(5), 216–223. doi:10.1108/09513540010344731 Waschull, S. B. (2005). Predicting success in online psychology courses: Self-discipline and motivation. Teaching of Psychology, 32(3), 3. doi:10.1207/s15328023top3203_11 Watkins, D. (1998). Assessing approaches to learning: A cross-cultural perspective. In Dart, B., & Boulton-Lewis, G. (Eds.), Teaching and learning in higher education. Melbourne, Australia: The Australian Council for Educational Research. Wetzel, C. D., Radtke, P. H., & Stern, H. W. (1994). Instructional effectiveness of video media. Hillsdale, NJ: Lawrence Erlbaum Associates. Wolfe, R. N., & Johnson, S. D. (1995). Personality as a predictor of college performance. Educational and Psychological Measurement, 55, 177–185. doi:10.1177/0013164495055002002
Zimmerman, B. J. (1986). Development of selfregulated learning: Which are the key subprocesses? Contemporary Educational Psychology, 16, 307–313. doi:10.1016/0361-476X(86)90027-5
Jeong, A. C. (2007). The effects of intellectual openness and gender on critical thinking processes in computer-supported collaborative argumentation. Journal of Distance Education, 22(1), 1–18.
Zimmerman, B. J. (1994). Dimensions of academic self-regulation: A conceptual framework for education. In Schunk, D. H., & Zimmerman, B. J. (Eds.), Self-regulation of learning and performance: Issues and educational applications (pp. 3–2l). Hillsdale, NJ: Erlbaum.
Landy, F. J., & Conte, J. M. (2009). Work in the 21st Century: An Introduction to Industrial and Organizational Psychology (3rd ed.). Hoboken, NJ: John Wiley & Sons.
Zimmerman, B. J., & Martinez-Pons, M. (1986). Development of a structured interview for assessing student use of self-regulated learning strategies. American Educational Research Journal, 23, 614–628.
ADDITIONAL READING Arbaugh, J. B., & Hornik, S. (2006). Do Chickering and Gamson’s Seven Principles Also Apply to Online MBAs? The Journal of Educators Online, 3(2), 1–18. Barrick, M. R., & Ryan, A. M. (Eds.). (2003). Personality and work: Reconsidering the role of personality in organizations. San Francisco, CA: Jossey-Bass. Crutsinger, C. A., Knight, D. K., & Kinley, T. (2005). Learning style preferences: Implications for Web-based instruction. Clothing & Textiles Research Journal, 23(4), 266–277. Furnham, A., Jackson, C. J., & Miller, T. (1999). Personality, learning style and work performance. Personality and Individual Differences, 27, 1113–1122. Harris, R., Dwyer, W. O., & Leeming, F. C. (2003). Are learning styles relevant in web-based instruction? Journal of Educational Computing Research, 29(1), 13–28. doi:10.2190/YHL4-UP7P-K0GD-N5LJ
Lu, J., Yu, C. S., & Liu, C. (2003). Learning style, learning patterns, and learning performance in a WebCT-based MIS course. Information & Management, 40, 497–507. doi:10.1016/S0378-7206(02)00064-2 Maki, W. S., & Maki, R. H. (2002). Multimedia comprehension skill predicts differential outcomes of Web-based and lecture courses. Journal of Experimental Psychology: Applied, 8(2), 85–98. doi:10.1037/1076-898X.8.2.85 Maki, W. S., & Maki, R. H. (2003). Prediction of learning and satisfaction in web-based and lecture courses. Journal of Educational Computing Research, 28(3), 197–219. doi:10.2190/DXJU-7HGJ-1RVP-Q5F2 McCrae, R. R., & Allik, J. (Eds.). (2002). The Five-Factor Model of Personality Across Cultures. New York, NY: Springer. McCrae, R. R., & John, O. P. (1992). An introduction to the five-factor model and its applications. Journal of Personality, 60, 175–215. doi:10.1111/j.1467-6494.1992.tb00970.x Neuhauser, C. (2002). Learning style and effectiveness of online and face-to-face instruction. American Journal of Distance Education, 16(2), 99–113. doi:10.1207/S15389286AJDE1602_4 John, O. P., Robins, R. W., & Pervin, L. A. (Eds.). (2008). Handbook of personality: Theory and research (3rd ed.). New York, NY: The Guilford Press.
Overbaugh, R. C., & Lin, S. Y. (2006). Student Characteristics, Sense of Community, and Cognitive Achievement in Web-based and Lab-based Learning Environments. Journal of Research on Technology in Education, 39(2), 205–223.
Wiggins, J. S. (Ed.). (1996). The Five-Factor Model of Personality: Theoretical Perspectives. NY, NY: The Guilford Press.
Penger, S., Tekavčič, S. M., & Dimovski, S. V. (2008). Comparison, validation and implications of learning style theories in higher education in slovenia: an experiential and theoretical case. International Business & Economics Research Journal, 7(12), 25–44.
DeRouin, R. E., Fritzsche, B. A., & Salas, E. (2005). E-learning in organizations. Journal of Management, 31(6), 920–940. Tzeng, G. H., Chiang, C. H., & Li, C. W. (2007). Evaluating intertwined effects in e-learning programs: A novel hybrid MCDM model based on factor analysis and DEMATEL. Expert Systems with Applications, 32, 1028–1044. doi:10.1016/j.eswa.2006.02.004 Vacha-Haase, T., & Thompson, B. (2002). Alternative ways of measuring counselees’ Jungian psychological-type preferences. Journal of Counseling and Development, 80(2), 173–179. Wang, K. H., Wang, T. H., Wang, W. L., & Huang, S. C. (2006). Learning styles and formative assessment strategy: Enhancing student achievement in Web-based learning. Journal of Computer Assisted Learning, 22, 207–217. doi:10.1111/j.1365-2729.2006.00166.x
KEY TERMS AND DEFINITIONS
Agreeableness: A tendency to be good-natured, cooperative, tolerant, courteous, helpful, trusting, and forgiving.
Big-Five Model: A five-factor model of personality that includes Extraversion, Agreeableness, Emotional Stability, Conscientiousness, and Openness to Experience.
Conscientiousness: A tendency to be responsible, hardworking, dependable, persistent, and achievement oriented.
Emotional Stability: A tendency to be calm, enthusiastic, even-tempered, and self-confident.
Extraversion: A tendency to be sociable, gregarious, talkative, assertive, adventurous, active, and ambitious.
Openness to Experience: A tendency to be imaginative, cultured, curious, broad-minded, artistically sensitive, and intelligent.
Personality: A combination of psychological traits that characterizes a person.
Chapter 14
A Method for Adapting Learning Objects to Students’ Preferences
Ana Sanz Esteban, University Carlos III of Madrid, Spain
Javier Saldaña Ramos, University Carlos III of Madrid, Spain
Antonio de Amescua Seco, University Carlos III of Madrid, Spain
DOI: 10.4018/978-1-60960-615-2.ch014
ABSTRACT The development of information and communications technologies (ICT) in recent years has led to new forms of education, and consequently, e-learning systems. Several learning theories and styles define learning in different ways. This chapter analyzes these different learning theories and styles, as well as the main standards for creating contents with the goal of developing a proposal for structuring courses and organizing material which best fits students’ needs, in order to increase motivation and improve the learning process.
INTRODUCTION Throughout history, important technological breakthroughs have completely altered society; new technologies have replaced the previous ones even though they had provided support for a long time. The invention of the printing press by Gutenberg replaced the manuscript. Printing led to the dissemination of information in the
form of books and created interest in literacy, thus encouraging schooling. Then came the telegraph followed by the telephone, radio, the cinema, etc. until the present day Web. Currently, the Web represents the most relevant technology in communication. It can be described as the key component that has revolutionized and popularized the use of the Internet because it is an open, flexible and very simple communication technology and broadcast medium. This has resulted in a wide range of applications such as
electronic commerce, electronic banking or online entertainment systems. New technologies have invaded our daily lives. In the education sector, these technologies provide an excellent means of breaking the geographical and temporal boundaries of traditional teaching and learning, revolutionizing and, at the same time, changing the concept of distance education. Thus, the Web has become a basic infrastructure for developing distance teaching-learning processes, resulting in a model known as e-learning. This has led to one of the most important revolutions in the way knowledge is transferred, because the computer has become an interactive and easily accessible learning environment. Knowledge is becoming a valuable resource. Recently, there has been increasing interest in problems related to knowledge acquisition, especially in relation to the economy and, particularly, economic growth. It has been shown that investments in activities and knowledge-based resources are important for the competitiveness of economies. However, knowledge is not something that has emerged in our time; knowledge dates back to the beginning of creation, and thanks to certain kinds of knowledge, such as the invention of tools, how and where to hunt, and how to grow crops, we have been able to progress and evolve. Thus, knowledge, information and technological changes have always been crucial to economic growth. The growth of Information and Communications Technologies (ICT) has allowed individuals and companies greater access to information and markets. It has consolidated knowledge as a new factor of production, because knowledge acquisition determines the ability to innovate in an environment where access to information is increasing. In relation to this, new concepts such as “knowledge management” and “intellectual capital” have emerged, the underlying idea being that high levels of knowledge, skills and competencies
are critical to the success of enterprises and economies. Therefore, in our economic environment, knowledge is an essential element of the information economy. From a business perspective, and in a broader sense in public administration, efforts have been made to incorporate knowledge as an asset capable of creating a competitive advantage and serving clients more effectively. Within this framework, training is an important factor in the development of society. If knowledge is a very important asset for the company, employees need to be highly trained, and the way to acquire knowledge is training. Formal, continuous and occupational training are on the increase in Spain and in neighboring countries. Formal training includes studies leading to a diploma (vocational training) or a higher qualification (a Bachelor’s, a Master’s or a doctorate). Continuous training is for employees who want or need to improve their knowledge and skills; it is aimed at people who have a professional activity, and its objective is for workers to improve the way in which they work or to advance their professional careers. Occupational training includes all actions designed to prepare the unemployed for a particular job. To a greater or lesser extent, these three areas of training demand new educational models, adapted to the growing needs of an increasingly global and interdependent society. Learning is a complex process in which student motivation, the teacher, the learning material and several other aspects interact. Traditional classroom teaching is increasingly giving way to virtual environments, where different issues about learning have to be taken into account. The basic learning concepts remain the same regardless of where studying takes place: students have to be motivated in the learning situation and the material has to be readily available. Learning has changed dramatically over recent decades as the technical revolution has brought new opportunities to learn via the Internet (Kanninen, 2009).
We can identify three types of training: face-to-face teaching, open learning and online learning. Because of its age and its position as the main educational trend throughout the world, the first educational model is face-to-face teaching. Its major component is the presence of both student and teacher in the same physical space at the same time. Open learning has been developed in recent years to allow students greater flexibility to study at their own pace, using the materials available. It does not require the presence of the student or teacher in the same physical place at the same time; learning is done at home, using books, audiovisuals (videos, CDs, films) and other teaching materials sent by post. This educational model eliminates geographical and time barriers, but it lacks the figure of the tutor, whose role is relevant within the teaching-learning process. Technological advances have allowed education to move one step forward. Thanks to these advances, a new educational model, online teaching, based on e-learning, is possible. E-learning is a concept that integrates the use of technology and teaching resources to design and develop training courses for online teaching. The main advantage is that students and teachers can collaborate and interact synchronously and asynchronously through a virtual space, without the need to meet physically in the same place and at the same time. One should not lose sight of the pedagogical aspect. Apart from the influence of technology in education, pedagogy plays a very important role in the teaching and learning process. If it is necessary to take this into account in face-to-face teaching, it is indispensable in online teaching. Thus, e-learning integrates the use of technology and teaching resources in the design and development of training courses for online teaching; its main advantage is the possibility of attending class remotely and collaborating with others through a virtual space, without the need for everyone to meet physically in the same place at the same time.
Designing and building an e-learning system is not an easy task. At first glance, it might seem that the most important aspect is the implementation of the technological platform that provides the best mechanisms and facilities to allow teachers and students to interact over distance, and that the remaining tasks consist of selecting the contents, implementing them in a digital format and publishing them on the e-learning platform. However, as we will see throughout this chapter, this idea is erroneous. Technology plays an important role because appropriate support is needed to provide mechanisms for sharing resources and knowledge and for teachers and students to interact. But having the best e-learning platform does not ensure learning, since one of the main issues of the teaching-learning process is the contents and their quality. So, the development of contents for an e-learning system entails carrying out a set of tasks which are not trivial. The preparation of didactic material for courses needs to be studied carefully; we should take into account the different pedagogical factors which greatly influence quality. When the tutor prepares the content and classes for a face-to-face course, great care is taken in selecting material and preparing classes. Similarly, when the tutor of an online course prepares the content, he or she should always bear in mind that there is no direct physical contact between teacher and student and that the material provided has to be better than that of a face-to-face course. The use of a technological platform as a support implies that the contents are digital. Therefore, the most appropriate format and technology tools should be selected, taking into account the different factors that affect learning. Not all students learn in the same way, so depending on their learning style, one format will be more desirable than another. This is why a course should not have a single digital format, but should provide the same content in different digital formats (text, images, videos, sounds, etc.) to fit the learning style of each student.
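To make this idea more concrete, the following minimal Python sketch shows how a single piece of course content could be stored in several digital formats and how a delivery layer might choose the variant that best matches a student's preferred system of representation. It is only an illustration under our own assumptions: the class, the format labels and the preference table are invented for the example and do not correspond to any particular e-learning platform.

```python
# Minimal illustration (hypothetical names): one learning item stored in
# several digital formats, selected according to a student's preference.

from dataclasses import dataclass, field

@dataclass
class ContentItem:
    """A single piece of course content offered in several formats."""
    title: str
    # Maps a format label ("text", "video", "audio", ...) to a resource URI.
    variants: dict = field(default_factory=dict)

# Assumed, simplified mapping from representation systems to media formats.
PREFERRED_FORMATS = {
    "visual": ["video", "image", "text"],
    "auditory": ["audio", "video", "text"],
    "kinesthetic": ["simulation", "exercise", "video", "text"],
}

def pick_variant(item: ContentItem, representation_system: str) -> str:
    """Return the URI of the variant that best fits the student's profile."""
    for fmt in PREFERRED_FORMATS.get(representation_system, ["text"]):
        if fmt in item.variants:
            return item.variants[fmt]
    # Fall back to any available variant so the student is never left without material.
    return next(iter(item.variants.values()))

if __name__ == "__main__":
    lesson = ContentItem(
        title="Introduction to path analysis",
        variants={"text": "lesson1.html", "video": "lesson1.mp4", "audio": "lesson1.mp3"},
    )
    print(pick_variant(lesson, "auditory"))     # -> lesson1.mp3
    print(pick_variant(lesson, "kinesthetic"))  # no simulation available; falls back -> lesson1.mp4
```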
Because of the popularity of e-learning systems, the main objective of this chapter is to analyze students’ learning characteristics in order to define a possible structure for e-learning courses and a way of organizing their contents that motivates students and enhances performance. Section 2 analyzes different learning theories and the different learning styles found in the literature; it also presents some of the standards on contents. Section 3 analyzes the role of teachers in the teaching-learning process, proposes a teaching-learning method, describes the student learning model and presents a method for structuring courses and organizing material adapted to students’ needs. Section 4 explains future trends and Section 5 presents the conclusions of this work.
LITERATURE REVIEW E-learning has emerged as a means of education for people who cannot attend traditional classes but who want to improve their education. Moreover, many organizations use e-learning to train their personnel, so that travel costs, delays in projects and other problems are reduced or avoided. But to make an e-learning course effective, so that students follow it and achieve their goals, students have to be highly motivated, teachers have to create the appropriate contents and the right technology has to be available. We need to know and understand the most relevant learning theories and styles to adapt contents to student needs in order to motivate them and foster learning. Moreover, students have different learning styles, and their motivation, strengths and preferences in the way they process information are also different. Therefore, teaching-learning courses should be designed to accommodate these differences. Other relevant factors that influence motivation and the student learning process are the organization and structure of the course. It is essential that these contents are organized and structured cor-
rectly because disorder can make students feel lost and, therefore, unmotivated, increasing the chances of their abandoning the course early. We have included a section which presents different ways of structuring and organizing the contents.
Learning Theories Learning theories and styles define learning in different ways. Learning theories help to understand and predict human behavior from the point of view of how knowledge is acquired. Their goal is to define theoretical models that specify the human learning process, bearing in mind that these theories do not provide a solution; they only draw our attention to those variables which are essential to it. There are different theories which have had an impact on learning studies over time. The two most widespread are constructivism and behaviorism. Before the 1960s, behaviorism was the leading learning theory, but since then constructivism has become the most influential. As we will see later, some other theories have been developed based on it, such as sociocultural learning or cooperative learning.
BEHAVIORISM The basic idea of behaviorism is that a person learns by reacting to a stimulus. It is a learning concept in which learning is defined as a change in behavior, and teacher activity and the learning material itself are emphasized. Behaviorism had a great impact on teaching in its day. The essential principles of this theory are systematic pre-planning, strict definition of the learning goals and evaluating learning in relation to those goals.
OBJECTIVISM Objectivism is based on Skinner’s stimulus-response theory: learning is a change in the behav-
ioral disposition of an organism (Jonassen, 1993) that can be shaped by selective reinforcement. The premise of the model is that there is an objective reality, and the goal of learning is to understand this reality and modify behavior accordingly. In terms of instruction, the objectivist model assumes that the goal of teaching is to transmit knowledge efficiently from the expert to the learner. Instruction structures reality into abstract or generalized representations that can be transferred and then recalled by students (Yarusso, 1992).
CONSTRUCTIVISM Constructivism denies the existence of an external reality independent of each individual’s mind. Rather than being transmitted, knowledge is created, or constructed, by each learner. The mind is not a tool for reproducing the external reality; rather, it produces its own, unique conception of events (Jonassen, 1993). The constructivist model calls for learner-centered instruction: individuals are assumed to learn better when they are forced to discover things themselves rather than when they are instructed. The learner must have experience with hypothesizing and predicting, manipulating objects, posing questions, researching answers, imagining and investigating in order for knowledge construction to occur.
COOPERATIVE LEARNING In constructivism, learning is assumed to occur when an individual interacts with objects, whereas in cooperativism learning results from interaction between individuals (Slavin, 1990). In this view, learning occurs as individuals exercise, verify, solidify, and improve their mental models through discussion and information sharing. While instructor-led communication is inherently linear, collaborative groups allow more branching and concentricity. In addition to sharing the pedagogi-
cal assumptions of constructivism, collaborationists also assume that knowledge is created as it is shared, and the more it is shared, the more learning takes place. The second pedagogical assumption is that learners have prior knowledge they can contribute to the discussion. The third assumption is that participation is critical to learning. The fourth assumption is that learners will participate if given optimal conditions, such as small groups to work interactively.
SOCIOCULTURAL LEARNING Whereas collaborativism and the cognitive information processing model are extensions of constructivism, the sociocultural model is both an extension of and a reaction against some assumptions of constructivism. In fact, the sociocultural model embraces the concept that there is no one external reality. Its proponents argue that constructivism and collaborativism force the cultural minority into adopting the understanding derived by the majority. The major assumption of socioculturalism is that the perspective of the middle-class Anglo-American male has prevented the emergence of a genuinely emancipated environment in which students can begin to construct meaning on their own terms and in their own interests (O’Loughlin, 1992). One major implication of the sociocultural model is that students should participate on their own terms. Instruction should deliver neither a single interpretation of reality nor a culturally biased interpretation of reality.
COGNITIVE INFORMATION PROCESSING The cognitive information processing model argues that learning involves processing instructional input to develop, test, and refine mental models in long-term memory until they are effective and reliable enough in problem-solving situations (Schuell, 1986). The frequency and
intensity with which a student cognitively processes instructional input controls the pace of learning. Instructional inputs that are unnoticed, or unprocessed, by learners cannot have any impact on mental models. The cognitive information processing model assumes that learners differ in terms of their preferred learning style. Instructional methods that match an individual’s learning style will be the most effective. The second assumption is that an individual’s prior knowledge is represented by a mental model in memory and that the mental model is an important determinant of how effectively the learner will process new information. The third assumption is that given a learner’s limited information processing capacity, attention is selective. Selective attention is an interrelated function of the display, the cognitive structure of the learner, the prior experience of the learner. Preinstructional methods such as topic outlines and learning goals might improve learning because they direct attention. Biggs (Biggs 1993) has shown that students who consider learning as better understanding reality are more likely to adopt a deeper approach.
Learning Styles Every e-learning system has a basic component: students. The main objective of teaching is student learning, and an e-learning system, regardless of how good it is, is useless if students do not learn. So, when a course is being developed, it is necessary to analyze how people learn.
HOW STUDENTS LEARN? Both from the teacher’s and student’s points of view the concept of learning is especially appealing, because it offers a wide range of possibilities for more effective teaching. Students have different strengths and preferences in the way they take in and process information, in other words they have different learning styles. Some prefer
to work with abstractions such as theories or mathematical models, whereas others are more comfortable with concrete information as facts or experimental data. In the same way, our learning style is influenced by several factors, the way in which we select and represent information being one of the most influential. We also pay attention to information depending on our interest and how we receive this information. Most people tend to pay attention to visual information more than to information perceived by the other senses. So, we have to be conscious of how people mentally represent information. •
•
•
System of visual representation: images are remembered, for example the page of a book is visualized (pictures, computer graphics, text, etc.). System of auditory representation: Information is remembered from sounds. (It is necessary to listen to a mental recording step by step, a sound, etc.) System of kinesthetic representation: Information is remembered from the other senses, associating it with feelings, smell, movements, etc.
Each system of representation has its own characteristics and is more efficient in some situations than in others. So, the student’s behavior will depend on what system of representation is favored. In order to promote student learning, teachers should organize work, considering the different types of learners, to favor their learning. In the case of e-learning systems in which learning is individual, the three systems should be considered so that the material is organized individually according to the student profile. Moreover, student motivation is an important factor. It activates the learner to target-oriented actions which lead to achieving some learning objectives (Kanninen, 2007). Self-confidence has an important role in studying and learning. Without a realistic view of oneself and sufficient self-confidence, learning
is very hard or impossible. Learners also work at different paces. Some are comfortable with the existing pace at university but others find it difficult to keep up. Students have specific needs that have to be satisfied, and it is therefore necessary to consider the different learning styles mentioned at the beginning of this section. Individual requirements, student interests and a variety of learning styles can be supported through several software tools. Emergent communication and information technologies facilitate the dissemination of learning resources and allow to access information in a continuous way in real time or in a flexible way, depending on the time available. In these learning environments, different ways in which students obtain and process the information must be considered, in order to offer them educational contents dynamically adapted to their special features of learning.
Major Learning Style Models Learning styles are important to develop quality education, and so, they have constituted the base for research in the last few years. Studies have revealed that learning depends on several personal factors, which in practice, means that every individual has his own style. This style can change with time and depends on the educational tasks. So learning is a dynamic process of adaptation. When the learning styles of most students in a class and the teaching style of the professor are mismatched, then students become bored, uncomfortable or are inattentive in class. They are not motivated, and in many cases drop out or school or at least switch to another course. So, it is important that teachers know how to teach each student and to motivate everyone so that they can pass and pass the tests. This section describes the most widespread learning styles and their specific characteristics.
•
Canfields Learning Styles Inventory (Candfield, 1992): The Canfields Learning Styles Inventory (CLSI) is a 30-item assessment using a 4-point rank order procedure for each item. Students ranked these choices in the order that best described their preferences or reactions. A ranking process was used to obtain the raw scores. Thus, the lower the score, the stronger the preference. Ranking of the four responses on each item equates to six paired comparison items in which the student chooses one item from each pair. For example: Peer, Organization, Goal Setting, and Competition each are ranked on a total of 6 items within the inventory. The CLSI has 21 subscale variables that are grouped into four major categories: ◦⊦ Conditions for Learning: (Peer, Organization, Goal Setting, Competition, Instructor, Detail, Independence, Authority) constitutes about two-fifths of the items in the inventory. These items, phrased in typical classroom situations, are designed to measure student motivational qualities. These motivational areas center on affiliation, structure, eminence, and achievement. ◦⊦ Area of Interest: (Numeric, Qualitative, Inanimate, People) measures students’ preferred subject matter or objects of study. ◦⊦ Mode of Learning: (Listening, Reading, Iconic, Direct Experience) concentrates on identifying the specific modality through which students learn best. ◦⊦ Expectation for Course Grade (A, B, C, D, and Total Expectation) is designed to predict the failure or success of a learner. The A to D Expectation scales reflects the level of performance anticipated.
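As a rough illustration of the ranking-based scoring just described, the hedged sketch below tallies CLSI-style raw scores by summing, for each scale, the ranks its options received across items, so that lower totals indicate stronger preferences. The item contents and the scale subset are invented for the example and do not reproduce the actual instrument.

```python
# Hedged sketch: each item asks the student to rank four options (1 = most
# preferred); a scale's raw score is the sum of the ranks its options
# received, so lower totals mean stronger preferences. Invented example data.

def raw_scores(responses):
    """responses: list of dicts mapping a scale name to the rank (1-4) its
    option received on that item."""
    totals = {}
    for item in responses:
        for scale, rank in item.items():
            totals[scale] = totals.get(scale, 0) + rank
    return totals

answers = [
    {"Peer": 1, "Organization": 3, "Goal Setting": 2, "Competition": 4},
    {"Peer": 2, "Organization": 1, "Goal Setting": 4, "Competition": 3},
]
print(raw_scores(answers))  # Peer has the lowest total (3), i.e. the strongest preference here
```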
•
•
Dunn and Dunn Learning Style Inventory (Dunn and Dunn, 1999): In the Dunn and Dunn Learning Style, learning strategies are included in the methods through which teachers teach and/or learners learn. Methods and strategies which match the different types of learners are, for example, contract activity packages (CAP), program learning sequences (PLS) and multi-sensory instructional packages (MIP). CAP is an instructional strategy that allows motivated people to learn at their own speed, with their best perceptual strength and reporting their knowledge the best way. PLS is a method for individualizing instructions. The content can be learned in small steps without direct supervision. The objectives range from simple to complex ones. MIP present and review the content through visual, auditory, tactual and/or kinesthetic instructional strategies. This is a self-contained teaching resource that enables students to master a set of objectives by beginning with their strongest perceptual modality and reinforcing learning with their secondary or tertiary strength. While the Dunn and Dunn model can be criticized as regards its broad basis and its assessment inventories, the attempt to concretize and systematically apply knowledge about learning styles and strategies gives students, teacher and parents a pragmatic means of individualizing education. Honey and Mumford’s learning style model (Honey and Mumford 1992): Honey and Mumford’s learning style model was developed in an attempt to apply learning style theory in the context of business and management studies. Based around the Kolb’s theory, this model identifies four types of students: reflector, theorist, pragmatist and activist (Zwanenberg et al. 2000).
Reflector: Observe and describe processes, try to predict outcomes and focuses on reflecting and trying to understand meaning. ◦⊦ Theorist: Focus on ideas, logic and systematic planning, but are mistrustful of intuition and emotional involvement. ◦⊦ Pragmatist: Like practicality, downto-earth approaches, group work, debate and risk taking, but tend to avoid reflection and deep level of understanding. ◦⊦ Activist: Individuals who enjoy new experiences, are active, tend to make decisions intuitively, but dislike structured procedures. Felder-Silverman Learning Style Model (Felder and Silverman, 1988): In 1988 Richard Felder and Linda Silverman formulated a learning style model designed to capture the most important learning style differences among engineering students and provide a good basis for engineering instructors to formulate a teaching approach that addresses the learning needs of all students (Felder and Silverman, 1988). The model classifies students according to the following four dimensions: ◦⊦ Sensitive, if they are concrete thinkers, practical, oriented toward facts and procedures; or intuitive, if they are abstract thinkers, innovative, oriented toward theories and underlying meanings). ◦⊦ Visual, if they prefer visual representations or presented materials such as pictures or diagrams; or verbal, when they prefer written and spoken explanations. ◦⊦ Active, if they learn by trying things out or enjoy working in groups; or reflective, if they learn by thinking ◦⊦
•
things through, prefer working alone or with a single familiar partner. Sequential, if they have a linear thinking process or learn in small incremental steps; or global, when they have a holistic thinking process or learn in large leaps.
◦⊦
Each of the defined dimensions has parallels in other models, but the combination of them is unique to this one. In order to assess preferences in the four dimensions of the Felder-Silverman model a 44-question instrument was designed. This on-line questionnaire, known as the Index of Learning Styles (ILS), returns a profile with scores on all four dimensions, brief explanations of their meanings and references that provide more detail about how the scores should and should not be interpreted.
◦⊦
◦⊦
•
•
Keefe’s Learning Style Profile (Keffe, 1986): The Learning Style Profile (LSP) was designed to give teachers an easy way to determine learning styles in middle level and senior high school students. The LSP diagnoses students’ cognitive styles, perceptual response tendencies, and study instructional preferences by means of 23 variables. As a first-level diagnostic, the LSP can be used to create individual student profiles or group profiles that are useful in creating learning style-based instruction. So, the learning style is defined as the composition of all these elements and the determine how students perceive, interact and respond into the learning environment. Kolb’s Experientiel Learning Model (Kolb, 1984): In this model students are classified as having a preference for concrete experience or abstract conceptualization (how they take information in), and active experimentation or reflective observation (how they process information).
◦⊦
◦⊦
Type 1: Type 1 learners respond well to explanations of how course material relates to their experience, interests and future careers. Type 2: Type 2 learners respond to information presented in an organized, logical fashion and benefit if they are given time for reflection. Type 3: Type 3 learners respond to having opportunities to work actively on well-defined task and to learn by trial-and-error in an environment that allows them to fail safely. Type 4: Type 4 learners like applying course material in new situations to solve real problems.
Preferences on this scale are assessed with the Learning Style Inventory (Atkinson, 1991) or the Learning Type Measurement (LTM, 2004). •
The Myers-Briggs Type Indicator (MBTI) (Pittenger, 1993): According to students’ preferences this indicator categorizes people in four scales derived from Jungs’s Theory of Psychological types: ◦⊦ Extraverts, if they try things out and focus on the outer world of people; or introverts, if they think things through and focus on the inner world of ideas. ◦⊦ Sensors, if they are practical, detail oriented and focus on facts and procedures; or intuitors, if they are imaginative, concept oriented and focus on meanings and possibilities. ◦⊦ Thinkers, if they are skeptical an tend to make decisions based on logic and rules; or feelers, if they are appreciative and tend to make decisions based on personal and humanistic considerations. ◦⊦ Judgers, if they set and follow agendas and seek closure even with in-
complete data; or perceivers, if they adapt to changing circumstances and postpone reaching closure to obtain more data.
•
Lawrence (Lawrence, 1993) characterizes the preferences, strengths, and weaknesses of each or the sixteen MBTI types in many areas of student functioning and offers numerous suggestions for addressing the learning needs of students of all types.
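As a small aside, the four dichotomies described above combine into the sixteen MBTI type codes, which the following snippet enumerates; scoring an actual MBTI questionnaire is considerably more involved and is not attempted here.

```python
# Simple illustration: the four MBTI dichotomies combine into sixteen
# four-letter type codes.

from itertools import product

DICHOTOMIES = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]

types = ["".join(letters) for letters in product(*DICHOTOMIES)]
print(len(types))   # 16
print(types[:4])    # ['ESTJ', 'ESTP', 'ESFJ', 'ESFP']
```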
LEARNING OBJECT CONTENT MODELS In this section, seven content models are presented and discussed in order to identify the weaknesses and strengths of each one. Models defined by some of the major players in the e-Learning field are presented first, followed by models that were developed for academic purposes.
LEARNATIVITY CONTENT MODEL The Learnativity foundation has developed a content model that provides a comprehensive description of granularity (Wagner 2002). This model defines a five level content hierarchy. •
•
Raw Media Elements: These are data and media elements and they make up the smallest level in the model and relate to content elements that reside at a pure data level. Examples of these elements include a single sentence or paragraph, images, or animations. Information Objects: An information object combines raw data and media elements, they are sets of raw media elements, and focuses on a single piece of information. Such content might explain a concept, illustrate a principle, or describe a process.
•
•
Exercises are often considered to be information objects. Application Objects: Based on a single objective, information objects are assembled into the third level of application objects. At this level, learning objects reside in a more restricted sense than the aforementioned definition of the LOM standard suggests (Duval and Hodgins 2003). Learning objects are a collection of information objects and relate to a single learning objective. Aggregate Assemblies: Aggregate assemblies constitute the fourth level and they deal with larger (terminal) objectives. This level corresponds with lessons or chapters. Collections: Lessons or chapters defined at the previous level can be assembled into larger collections known as courses and curricula and they constitute the fifth level.
The model has gained considerable acceptance in both training and education communities. It is used as a basis for a model defined in the Reusable Learning Project (Reusable Learning 2009) and has been adopted by the NLII Learning Object Virtual Community of Practice, which is now known as the Educause learning initiative (ELI) (ELI 2009).
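As a rough sketch of the five-level hierarchy just described, the following Python fragment models each Learnativity level as a simple container of the level beneath it. The class names merely mirror the terminology above and are not an official Learnativity data model.

```python
# Hypothetical, minimal object model mirroring the Learnativity hierarchy:
# raw media -> information object -> application object -> aggregate assembly -> collection.

from dataclasses import dataclass, field
from typing import List

@dataclass
class RawMediaElement:          # level 1: a sentence, image, animation, ...
    uri: str

@dataclass
class InformationObject:        # level 2: a single piece of information
    description: str
    media: List[RawMediaElement] = field(default_factory=list)

@dataclass
class ApplicationObject:        # level 3: a learning object tied to one objective
    objective: str
    information: List[InformationObject] = field(default_factory=list)

@dataclass
class AggregateAssembly:        # level 4: a lesson or chapter (terminal objective)
    terminal_objective: str
    learning_objects: List[ApplicationObject] = field(default_factory=list)

@dataclass
class Collection:               # level 5: a course or curriculum
    title: str
    lessons: List[AggregateAssembly] = field(default_factory=list)
```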
SCORM CONTENT AGGREGATION MODEL The most widely implemented set of specifications, intended to allow learning content to be developed independently of a particular delivery platform, is the Sharable Content Object Reference Model (SCORM) (SCORM 2009), a collection of specifications and standards that is documented and maintained by the advanced distributed learning initiative (ADL 2009). SCORM includes a content aggregation model that features Assets,
Sharable Content Objects (SCO), Activities and Content Aggregations. •
•
•
•
Assets: Assets are an electronic representation of media, text, images, audio, web pages or other data that can be presented in a web client. Sharable Objects Contents: SCOs are self-contained learning objects or learning components that meet additional technical requirements needed for interoperability with learning delivery platforms. To improve reusability, an SCO should be independent of its learning context. For example, an SCO could be reused in different learning experiences to fulfill different learning objectives. Activities: An activity aggregates SCOs and assets to form a higher level unit of instruction that fulfills higher level learning objectives. Content Aggregations: A Content Aggregation is a map (content structure) that can be used to aggregate learning resources in a well integrated unit of education (for example course, chapter, module, etc.).
SCORM uses a technical specification developed by the IMS Global Learning Consortium to define the format for content aggregations. One of the aims of SCORM was to enable an “object based” economy for learning objects that could be shared and reused across the Department of Defense (DoD).
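To make the aggregation levels more tangible, the schematic sketch below assembles a small SCORM-like content structure and prints an imsmanifest-style outline. It is deliberately simplified: the XML namespaces, metadata and sequencing information that a real SCORM package requires are omitted, and the identifiers and file names are illustrative.

```python
# Schematic sketch only: builds a SCORM-like content structure and prints an
# imsmanifest-style outline. Real SCORM packages require proper XML namespaces,
# metadata and schema references that are omitted here for brevity.

import xml.etree.ElementTree as ET

def build_manifest(course_title, activities):
    """activities: list of (item_title, resource_href) pairs, one per SCO."""
    manifest = ET.Element("manifest", identifier="course-manifest")
    organizations = ET.SubElement(manifest, "organizations")
    organization = ET.SubElement(organizations, "organization")
    ET.SubElement(organization, "title").text = course_title
    resources = ET.SubElement(manifest, "resources")

    for index, (item_title, href) in enumerate(activities, start=1):
        res_id = f"RES-{index}"
        item = ET.SubElement(organization, "item", identifierref=res_id)
        ET.SubElement(item, "title").text = item_title
        # Each SCO is a self-contained resource that the delivery platform can launch.
        ET.SubElement(resources, "resource", identifier=res_id,
                      type="webcontent", href=href)

    return ET.tostring(manifest, encoding="unicode")

print(build_manifest("Path Analysis 101",
                     [("Lesson 1: Concepts", "lesson1/index.html"),
                      ("Lesson 2: Practice", "lesson2/index.html")]))
```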
NAVY CONTENT MODEL (NCOM) The Integrated Learning Environment (ILE) is a strategic initiative and current execution effort that encompasses all forms of training methods including instructor-led, facilitated, and computer-based instruction. The Navy has refined the SCORM
326
content model, providing more specific content definitions for granularity levels that are identified as critical for the Navy Interactive Learning Environment (Navy ILE 2009). As the model builds upon SCORM, Navy content is SCORM compliant. The Navy content model distinguishes between learning object aggregations (LOAs), terminal learning objects (TLOs), enabling learning objects (ELOs), and assets. • •
•
•
LOA: is the top-level grouping of related content, containing TLOs and ELOs. TLO: is an aggregation of one or more ELOs. A TLO satisfies one terminal objective and correlates to an SCORM activity. Terminal learning objectives are usually associated with lessons. ELO: is an aggregation of one or more assets. An ELO satisfies one enabling objective and correlates to an SCORM SCO. Examples include illustrations and exercises. Asset: is a single text element or a single media element (e.g. an assessment object, a video, and other data elements).
A terminal objective is a major objective for a topic or task, describing the overall learning outcome. An enabling objective supports a terminal objective. Such an objective describes specific behaviors that must be learned or performed. The Navy content model uses SCORM as its foundation, so the relationship existing between them is presented below. • • • •
LOA (NCOM) ⇒ Content aggregation (SCORM). TLO (NCOM) ⇒ Activity (SCORM). ELO (NCOM) ⇒ Sharable Content Object (SCO) (SCORM). Asset (NCOM) ⇒ Asset with metadata (SCORM).
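The correspondence listed above can be captured directly as a lookup table, as in the following illustrative fragment.

```python
# Illustrative lookup table for the NCOM -> SCORM correspondence listed above.
NCOM_TO_SCORM = {
    "LOA": "Content aggregation",
    "TLO": "Activity",
    "ELO": "Sharable Content Object (SCO)",
    "Asset": "Asset with metadata",
}

def scorm_equivalent(ncom_level: str) -> str:
    """Return the SCORM construct that corresponds to an NCOM granularity level."""
    return NCOM_TO_SCORM[ncom_level]

assert scorm_equivalent("ELO") == "Sharable Content Object (SCO)"
```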
A Method for Adapting Learning Objects to Students’ Preferences
CISCO RLO/RIO MODEL
DLCMS COMPONENT MODEL
Cisco Systems, Inc. (Barrit et al. 1999) has also adopted an object-based strategy for developing and delivering learning content. Cisco defines lessons as reusable learning objects (RLOs) and topics of the lesson, as reusable information objects (RIOs). Reusable information objects relate to a single learning objective and contain content, practice items and assessment items. A practice item is an activity that gives the learner the ability to apply its knowledge and skills, like a case study or a practice test. An assessment item is a question or measurable activity used to determine if the learner has mastered the learning objective for a given RIO. Cisco further classifies each RIO as a concept, fact, procedure, process, or principle. To build a lesson or RLO, 7 ± 2 RIOs are grouped together with an overview and summary. The RLO-RIO strategy provides detailed guidelines to build RLOs and RIOs. For RIO types, and RLO overviews and summaries, the guidelines describe which content items are required and which may be used optionally. An RIO can function as an independent learning component that can be called up by a learner who needs a specific piece of information. Or a learner can summon an RLO for a more in-depth learning experience. RLOs can be sequenced to create a course on a particular subject. And RIOs can be combined together to build custom RLOs that meet the needs of individual learners. When learners want to take a “lesson” or reference a “job aid”, they request the raw items that make up the RLO and RIO from the digital library. Format and style sheets are then applied to the objects as they are packaged and delivered to the learner’s Web browser. Because RLOs and RIOs are stored free of format and style, they can be packaged using style sheets and templates specific for instructor-led training delivery.
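A hedged sketch of this structure is shown below: an RIO carries content, practice and assessment items and is classified with one of the five RIO types, while an RLO wraps a 7 ± 2 group of RIOs between an overview and a summary. The field names are ours and simplified from the guidelines described above.

```python
# Hypothetical sketch of the Cisco RLO/RIO structure described above:
# an RIO carries content, practice and assessment items and has one of five
# types; an RLO wraps 7 +/- 2 RIOs between an overview and a summary.

from dataclasses import dataclass, field
from typing import List

RIO_TYPES = {"concept", "fact", "procedure", "process", "principle"}

@dataclass
class RIO:
    objective: str
    rio_type: str            # one of RIO_TYPES
    content: List[str] = field(default_factory=list)
    practice_items: List[str] = field(default_factory=list)
    assessment_items: List[str] = field(default_factory=list)

@dataclass
class RLO:
    title: str
    overview: str
    rios: List[RIO]
    summary: str

    def is_well_formed(self) -> bool:
        """Check the 7 +/- 2 guideline and the RIO type vocabulary."""
        return (5 <= len(self.rios) <= 9 and
                all(r.rio_type in RIO_TYPES for r in self.rios))
```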
The dynamic learning content management system (dLCMS) project (Schluep et al. 2005) aims to provide a modularization strategy combined with structured markup to enhance the reusability of learning content. A component model is included that defines three aggregation levels: assets, content elements and learning units. •
•
Assets: These are media elements, such as images, videos, animations, or simulations. They are binary data objects, which cannot easily be divided into smaller components. They contain pictorial or auditory information, which can be static (image and graph) or dynamic (video, audio and animation). Content elements: These are defined as small, modular pieces of learning content, which: ◦⊦ serve as basic building blocks of learning content, ◦⊦ can be aggregated into larger, didactically sound learning units, ◦⊦ are self-contained, ◦⊦ are based on a single didactic content type, ◦⊦ are reusable in multiple instructional contexts, ◦⊦ may contain assets.
Examples include exercises, experiments, questionnaires and summaries. •
Learning units: These are defined as aggregations of content elements, which are presented to the learner. Typically, a learning unit serves as an online lesson and may be used to teach several learning objectives. A learning unit provides a way to define a chapter-like, hierarchical structure of nodes. Each node will be associated with a content element through reference. The content elements are not copied into the
learning unit, but are referenced by links. The component model does not define any further levels for the aggregation of learning units. The dLCMS model provides a well-defined hierarchy of learning object content: Assets are assembled into content elements and content elements are assembled into learning units. Learning units may be of any size and may be used for multiple learning objectives. dLCMS does not define a learning object level that relates to a single learning objective.
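The key dLCMS idea that learning units reference content elements by link rather than copying them can be illustrated with the following minimal sketch, in which one shared repository entry can be reused by any number of units; the repository layout and identifiers are invented for the example.

```python
# Illustrative sketch of the dLCMS idea that learning units reference content
# elements by link instead of copying them, so one element can be reused in
# several units. Names are ours, not the dLCMS API.

content_repository = {
    "ce-001": "Exercise: fit a simple regression model",
    "ce-002": "Summary: assumptions of linear regression",
}

# A learning unit is a chapter-like tree whose leaf nodes hold element IDs.
unit_statistics = {
    "title": "Regression basics",
    "nodes": [
        {"heading": "Practice", "element_id": "ce-001"},
        {"heading": "Wrap-up", "element_id": "ce-002"},
    ],
}

def resolve(unit, repository):
    """Materialize a unit for delivery by following the references."""
    return [(node["heading"], repository[node["element_id"]])
            for node in unit["nodes"]]

print(resolve(unit_statistics, content_repository))
```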
NETG LEARNING OBJECT MODEL NETg (L’Allier 1997), the National Education Training Group, is a Thomson Learning Company and worldwide leader in blended learning solutions. In NETg, a course is structured as a matrix divided into three major components: units (the vertical), lessons (the horizontal) and topics (the cells). Each unit, lesson and topic in this structure is partially defined by its relationship to the other components. • • • •
Course: It contains independent units. Unit: It contains independent lessons. Lesson: It contains independent topics. Topic: Contains a single objective, a learning activity and an assessment.
A topic is known as an NLO (NETg learning object), which is defined as the smallest independent instructional experience that contains an objective, a learning activity and an assessment that measures the learning objective. A learning objective is a single measurable or verifiable step on the way to a learning objective. Learning objectives establish what a learner is expected to do or learn and how an acceptable level of achievement will be verified. NETg is a member of the IMS Global Learning Consortium and has assembled
its own group of learning management system (LMS) developers whose systems are being designed to work with the NLO architecture. When the learner needs a piece of information, she can navigate to the digital library, type in a request, and get relevant NLOs. If the learner needs a full course on a subject, the system will build a course based on the NLOs needed.
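As an illustration of the on-demand assembly described above, the sketch below models an NLO as a topic with an objective, a learning activity and an assessment, and builds a small course by selecting the NLOs whose objectives match a learner's request. The catalogue and the keyword-matching rule are assumptions made for the example, not NETg's actual mechanism.

```python
# Illustrative sketch: a course assembled on demand from NLOs (topics), each
# carrying an objective, a learning activity and an assessment. The catalogue
# and matching rule below are invented for the example.

from dataclasses import dataclass
from typing import List

@dataclass
class NLO:                      # NETg learning object = one topic
    objective: str
    activity: str
    assessment: str

CATALOGUE = [
    NLO("identify the parts of a path diagram", "interactive diagram", "quiz 1"),
    NLO("estimate path coefficients", "guided exercise", "quiz 2"),
    NLO("interpret model fit indices", "worked example", "quiz 3"),
]

def assemble_course(request: str, catalogue: List[NLO]) -> List[NLO]:
    """Return the NLOs whose objective mentions any word of the request."""
    words = set(request.lower().split())
    return [nlo for nlo in catalogue
            if words & set(nlo.objective.lower().split())]

for nlo in assemble_course("path coefficients", CATALOGUE):
    print(nlo.objective)
```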
SEMANTIC LEARNING MODEL (SLM) The semantic learning model is aimed at supporting decomposition of learning objects and has been developed for academic purposes (Fernandes et al. 2005). The model is defined under six categories: •
•
•
•
•
•
Asset: It is the lowest granularity level. Assets can be pictures, illustrations, diagrams, audio and video files, animations, and text fragments. Pedagogical information: It is defined as “a group of assets that express the same meaning”. An example is a figure associated with a comment. Pedagogical entity: It is defined as a pedagogical information component, associated with a pedagogical role. Four roles are defined: concept, argument, solved problem and simple text. Pedagogical context: It is defined as a semantic structure or network in which pedagogical entities are grouped. Pedagogical document: It contains a pedagogical context, associated with prerequisites. Pedagogical schema: It is a group of many pedagogical documents made to elaborate a curriculum.
From a content perspective, four aggregation levels are defined. A pedagogical entity and a pedagogical document represent respectively a single pedagogical information component and a
single pedagogical context. Pedagogical roles and prerequisites are added as metadata. According to the authors of the model, an asset correlates to a Learnativity raw data and media element, a pedagogical information component to a Learnativity information object, a pedagogical entity to a Learnativity application object, a pedagogical context to a Learnativity aggregate assembly and a pedagogical document to a Learnativity collection.
DISCUSSION

The NETg learning object model consists of four levels. Three of them are defined for the aggregation of learning objects, or topics, but the model only provides an abstract definition of their content and does not specify learning object components. The other models define learning object components in one or two levels. SCORM, NCOM and Cisco define only one level. Cisco describes the content types of this level conceptually, but no specification is given from a technical point of view. These models seem to agree that this level consists of individual, reusable resources. Moreover, in SCORM, assets can be aggregated with other assets. The Learnativity, SLM and dLCMS models define a second level of learning object components that aggregates first-level components. These models define this component level as an aggregation of assets that focus on a single piece of information, but do not necessarily relate to a specific learning objective. The dLCMS model defines learning objects as aggregations that relate to one or more learning objectives. The other content models consistently define learning objects as content aggregations that relate to a single learning objective. These models define aggregations of learning objects into an additional level that relates to multiple or larger learning objectives, and lessons are commonly associated with this aggregation level.
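Read as data structures, the shared pattern across these models is a simple containment hierarchy. The sketch below is only an illustration of that pattern, not part of any of the specifications discussed; the Python class and field names are our own.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Asset:
    """Level 1: an individual, reusable resource (image, text fragment, video...)."""
    uri: str
    media_type: str

@dataclass
class ContentElement:
    """Level 2: assets grouped around a single piece of information."""
    title: str
    assets: List[Asset] = field(default_factory=list)

@dataclass
class LearningObject:
    """Level 3: content elements that together address one learning objective."""
    objective: str
    elements: List[ContentElement] = field(default_factory=list)

@dataclass
class Lesson:
    """Level 4: learning objects aggregated for multiple or larger objectives."""
    title: str
    learning_objects: List[LearningObject] = field(default_factory=list)

# A one-lesson hierarchy built bottom-up.
diagram = Asset(uri="figures/heap.png", media_type="image/png")
caption = Asset(uri="texts/heap_caption.txt", media_type="text/plain")
element = ContentElement(title="What a heap looks like", assets=[diagram, caption])
objective = LearningObject(objective="Describe the heap property", elements=[element])
lesson = Lesson(title="Priority queues", learning_objects=[objective])
print(len(lesson.learning_objects[0].elements[0].assets))   # -> 2
```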
Learnativity, NCOM and SCORM define a third aggregation level for learning objects. Finally, the NETg model identifies a content hierarchy for this granularity level (unit, course and learning unit, course, and curriculum, respectively).
A PROPOSAL FOR STRUCTURING AND ORGANIZING CONTENTS ACCORDING TO STUDENTS' PREFERENCES

E-learning and classroom education differ in many ways. However, they share the three basic components of any educational system: teachers, students, and knowledge. Several interactions among these components are relevant to the teaching-learning process when it is carried out at a distance (Anderson, 2003). The most important component is the student, so all the effort the teacher makes, and the contents through which the teacher wants to transfer knowledge, should be aimed at achieving student learning. We tend to assume that all students have the same characteristics, independently of the learning setting (online or face-to-face); however, many studies (Cabero, 2006) have shown that there is a set of distinctive characteristics that influence student learning, including motivation, independence, and self-reliance as a student. It has been shown that introverted students are more successful in an e-learning context; that self-direction and self-efficacy are important for student satisfaction in this type of teaching; that the ability and preference for active learning determine the learning that students achieve in hypermedia contexts; and that self-regulation is a significant variable. Students need to master certain techniques of intellectual work, especially those related to independent study and to carrying out activities based on collaborative work. In short, e-learning students should have a range of skills, among which are the following: to know when there is a need for information,
to identify this need, to know how to work with different sources and symbolic systems, to control information overload, to evaluate and select quality information, to structure information, to have skills in presenting ideas, to use information effectively to address a problem, and to communicate the information found to others. However, even if a student has all these skills, the success of the teaching-learning process is not guaranteed: as in the classroom, students can be excellent, but if the teachers are unable to do their job properly, students will probably not achieve their goal, which is learning. So the teacher is also an important element of the teaching-learning process, and is even more important in e-learning systems. Besides having the responsibility to transfer knowledge, the teacher should motivate students and provide them with appropriate learning tools, among which is content design. Innovation and creativity are important when teachers design courses, contents, and activities. The activities, or e-activities as they are referred to in some environments, help students to stop being passive and to maintain a positive attitude. This is essential if they want to benefit from an e-learning system. Moreover, teachers should ensure that learning is not a mere memory-storage of the information presented but a cognitive restructuring; in other words, students should carry out real learning actions. As a result, it is important to implement a collaborative and cooperative learning process, in which there is a sense of team among the various members (both students and teachers) participating in the course, to encourage the participation of all stakeholders and to share knowledge. Moreover, this helps address one of the variables that most influence the failure of e-learning initiatives: the sense of isolation and loneliness students are exposed to. Another factor that greatly influences student learning is the contents. Knowledge transfer takes place through the contents, so their design should be carefully thought out. It is
important that the contents motivate students rather than having the opposite effect on them. This requires the right amount of quality content, appropriate to the characteristics of the group, and a proper structure.
The Teacher

The teacher is responsible for guiding students through the learning process, and has other responsibilities that are relevant to making learning a success. One of these is to determine what contents make up the course and how to present them to students. This means that the teacher has to know the subject being taught thoroughly, as well as the characteristics (skills, behavior, etc.) of the students. Knowledge of the subject is essential because the teacher has to analyze what students should learn in order to select the appropriate contents, in other words, contents which meet the needs of the subject and allow students to acquire knowledge of it. In order to provide each student with the most appropriate content for his profile, the teacher needs to take into account the relationship between the student profile and the types of material set out below. Motivation is a very important factor in the learning process. It refers to the internal characteristics that, combined with external tactics and environmental factors, encourage and support students. Student performance depends largely on motivation; a motivated student will acquire more and better knowledge than one who is not. Self-motivation depends entirely on the student and is related to internal factors. The external tactics, however, depend on the teacher, who should motivate students using different methods and techniques. Some of these are:
• Attractive contents: Friendly, pleasant and entertaining contents boost student motivation. If these are suited to student profiles, the attitude will be positive and active.
• Feedback: It increases motivation because it permits the student to control his learning level.
• Back-up: It increases motivation and interest in the subject. Moreover, providing an incentive when students make a great effort achieves a positive effect on their progress.
• Dynamism: Boosting the relationship between students, and between students and the teacher; they should work collaboratively.
• Knowledge of the objectives: Knowing the objectives permits students to plan their study time. Planning has a positive effect on motivation.

Figure 1. Learning-teaching method
The teacher is not an isolated piece in the whole educational structure and teaching activities should be developed in accordance with established goals and objectives. In order to facilitate teaching, we propose a learning-teaching method which is presented in Figure 1.
Figure 2. Components of an e-learning course: main concepts
Student Learning Model

The Felder-Silverman learning style model (FSLSM) is considered the most appropriate for use in an e-learning system (Carver, 1999). This model, which was presented in the literature review section, makes it possible to categorize students according to the way they process, perceive, organize and understand information, classifying them into eight learning styles. Based on these dimensions, the classification below shows how teaching resources could be distributed across the different dimensions. We have distinguished between several kinds of resources: main elements, which should always be present in an e-learning course; complementary, interactive and evaluation elements, some of which may be optional; material format, that is, the resource type; and navigation tools, that is, the ways of navigating the course. For every resource, a figure has been included to indicate which type of resource is most appropriate for each learning style. For example, for the global learning style, objectives and synthesis are the main contents to be included in the course.
Figure 3. Components of an e-learning course: complementary material and interactive and evaluable elements
Figure 4. Components of an e-learning course: format of material
Given their nature, e-learning systems allow the course content to be tailored to the individual characteristics of each student. Consequently, the teacher should take individual learning styles into account and generate suitable content for each of them.
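As a purely hypothetical illustration of how such tailoring could be supported by software, the sketch below maps learning-style poles to preferred resource types. Except for the global-style example taken from the text, the assignments are invented for the example and do not reproduce the distributions proposed in Figures 2-5.

```python
# Hypothetical mapping from Felder-Silverman style poles to preferred resource types.
# Only the "global" entry follows the example given in the text; the rest are
# invented for illustration (the chapter's actual proposal is in Figures 2-5).
STYLE_RESOURCES = {
    "global":     ["objectives", "synthesis"],
    "sequential": ["step-by-step guides", "outlines"],
    "visual":     ["diagrams", "videos"],
    "verbal":     ["readings", "podcasts"],
}

def resources_for(profile):
    """Collect the resource types suggested for a student's style profile."""
    suggested = []
    for pole in profile:
        suggested.extend(STYLE_RESOURCES.get(pole, []))
    return suggested

print(resources_for(["global", "visual"]))
# ['objectives', 'synthesis', 'diagrams', 'videos']
```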
Structure for an E-Learning Course

As we have already mentioned, the third basic component of an e-learning course is the contents, and it is therefore necessary to define them. We can say that "an e-learning content is all information, data and method, which are stored, supported and processed on e-learning platforms, and which do
not have a strict relation to items used for system management”; it is the knowledge to be transmitted to the student through an e-learning course. Implicitly, in the previous sections we have spoken of content because of the close relationship between teacher-content and student-content. Therefore, in this section, we will discuss the organization of the contents of a course, and consider the different concepts presented in the previous sections. The organization of the contents presents two perspectives or dimensions: physical and pedagogic. The physical perspective refers to how contents are organized or structured in an e-learning platform. The available network technologies can
be used to implement any of the organizations proposed below, which range from the simplest and easiest to the most complex.

Figure 5. Components of an e-learning course: navigation tools

Figure 6. The student preference adapted learning method

• Linear structure: The structure is that of a list or linked list. It is the simplest way to organize contents: we move as if we were reading a book, so from one page the student can only go to the next or the previous one. Query options for students are therefore limited to going forward or backward. This structure is very useful when we want the student to follow a fixed or guided path. In addition, it prevents distraction, because the student cannot access other contents that are not of interest at a particular moment, which is why this structure helps the student focus on a particular topic. However, the contents should not be too long or uninteresting, to prevent boredom.
• Hierarchical structure: This is a typical tree structure, where the root contains the overall objectives of the course and the related subjects. Selecting a topic gives access to its contents. It is one of the best ways to organize complex contents: concepts are divided into more specific topics, so students move up and down the tree as new concepts appear.
• Network structure: This is an organization with no apparent established order; contents can be linked to each other in any order. A network structure is used when we do not want to restrict the paths that the user can follow. This structure is the most dangerous, because students who are unable to work in a structure of this type may get lost or be unable to find what they are looking for.
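One minimal way to see the difference between the three organizations is to model each as a data structure. The sketch below, with invented page and topic names, represents the linear structure as an ordered list, the hierarchical structure as a tree, and the network structure as a graph of free links.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Page:
    title: str

# Linear structure: an ordered list; navigation is only forward or backward.
linear_course: List[Page] = [Page("Intro"), Page("Core concepts"), Page("Summary")]

# Hierarchical structure: a tree whose root holds the overall course objectives.
@dataclass
class TopicNode:
    title: str
    children: List["TopicNode"] = field(default_factory=list)

root = TopicNode("Course objectives", children=[
    TopicNode("Topic A", children=[TopicNode("Subtopic A.1"), TopicNode("Subtopic A.2")]),
    TopicNode("Topic B"),
])

# Network structure: a graph; any page may link to any other page.
network_course: Dict[str, List[str]] = {
    "Intro": ["Core concepts", "Summary"],
    "Core concepts": ["Intro", "Summary"],
    "Summary": ["Core concepts"],
}

print(linear_course[1].title, "/", root.children[0].children[0].title)
```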
The pedagogical perspective refers to what contents should make up a unit of learning and how to organize them. The Elaboration Theory, developed by Charles M. Reigeluth (1979-1983), is an important piece of research on the psychology of learning. It defines a way to structure and organize course contents to achieve optimal acquisition, retention and transfer of knowledge. A sequence based on the principles of this theory orders the contents from the easiest and most general to the most complex and specific concepts. In short, to improve learning outcomes it is advisable to (1) initially present contents in a general way, as a comprehensive overview of the unit of study, and (2) then go progressively deeper into each topic. We propose the use of this theory to organize e-learning course contents. Figure 6 shows our proposal. The Student Preference Adapted Learning Method (SPAL Method) has been defined to structure contents (Knowledge) according to the student's learning style (Student), the suitable technology to present them (Technology), and the support and guidance provided by the teacher when required (Teacher). In the SPAL Method, the course content structure is the key element, and it conditions student learning. The learning-teaching method has been used to define the pedagogical structure of the course, in other words, what teaching resources will be used. This structure is presented in Figure 7. A course is composed of learning objects, activities, tests, and additional contents. A learning object is the theory the student must learn and is
Figure 7. Pedagogical structure of a course
adapted to his learning style. An activity is a way to determine whether the student has learnt: it can be a self-assessment, if the goal is for the student to verify the degree of knowledge acquired, or a practice, if the goal is for the student to check his progress. Tests are a different way of verifying the degree of knowledge the student acquires. Finally, there are other additional resources, such as articles, bibliography, URLs or a glossary, that are included to facilitate student learning. When a teacher is developing the contents of a course, he has to take into account the specific characteristics of each learning style, based on the categories presented in the Student Learning Model section. By following the SPAL Method, teachers can design e-learning courses that meet the needs of their students according to the specific preferences of each one. This increases student motivation and therefore leads to considerably better results.
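Figure 7 can also be read as a composite data structure. The following sketch is our own minimal rendering of that structure, with illustrative class and field names that are not part of the SPAL Method specification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LearningObject:
    """Theory the student must learn, adapted to his or her learning style."""
    topic: str
    learning_style: str

@dataclass
class Activity:
    """A way to determine whether the student has learnt."""
    description: str
    kind: str   # "self-assessment" or "practice"

@dataclass
class Course:
    title: str
    learning_objects: List[LearningObject] = field(default_factory=list)
    activities: List[Activity] = field(default_factory=list)
    tests: List[str] = field(default_factory=list)
    additional: List[str] = field(default_factory=list)   # articles, bibliography, URLs, glossary

course = Course(
    title="Databases I",
    learning_objects=[LearningObject("Relational model", "visual")],
    activities=[Activity("Draw an entity-relationship diagram", "practice")],
    tests=["Unit 1 test"],
    additional=["Glossary", "Bibliography"],
)
print(course.learning_objects[0].learning_style)   # -> visual
```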
FUTURE TRENDS

One way to improve the quality of online courses is for teachers to take an active role. It is common for the teaching staff not to get involved in the teaching-learning process: they simply pack knowledge into an e-learning platform and expect the
platform to do the rest. However, the platform is just a mechanism that allows collaborative and cooperative work, so it is necessary that the teaching staff get involved in the process and take an active role. Teachers should know the different pedagogical factors that influence student learning, such as learning styles and learning theories, and the use of didactic resources and methodologies. Research should begin with an assessment process to determine how the active involvement of the teacher influences the teaching-learning process. With these results, researchers could then work to establish a proposal of new didactic resources and methodologies. Because people learn differently, it is necessary to offer different forms of learning, so e-learning courses should be adapted to the learning preferences of each student. Currently, technology makes it possible to build a profile of each student, considering his or her particular features. From this profile, the way in which every course and its contents are presented could be personalized: highlighting the contents that best fit the student's learning model, offering navigation tools appropriate to the profile, providing different activities, and so on. All these actions require the active participation of the teacher, who designs and develops courses according to these considerations. Learning styles cannot be taken into account if teachers are
unaware of what they are doing. So, the first step is to make the teaching staff aware of all these issues. Afterwards, it is necessary to begin an experimentation process to evaluate how adapted contents affect student performance. To do this, teachers have to create learning courses composed of contents adapted to student profiles. Researchers can then analyze the results obtained and establish new directions for improving learning contents. Moreover, new technologies should be developed to make it easier for teachers to create different types of contents in different formats within a reasonable time and cost. Such resources will help students make the most of their effort.
CONCLUSION

The objective of this chapter has been to analyze different factors that influence student learning. To achieve this goal, it was necessary to review different learning theories and different learning styles. After that, the authors analyzed the role of teachers and their main responsibilities, as well as students' learning process, in order to propose a pedagogical structure for an e-learning course. The relevant role played by both teaching contents and e-learning was also discussed, from the perspective that highly skilled and excellent students are not enough: an active teacher who participates and creates high-quality contents is necessary to prevent the sense of isolation, discouragement and lack of motivation. Considering all these factors and the special features of the way each student learns, this chapter has proposed a new method that facilitates teaching and adapts knowledge to the specific preferences of each student.
REFERENCES

Advanced Distributed Learning (ADL). (2009). Retrieved on December 29, 2009, from http://www.adlnet.org

Anderson, T. (2003). Modes of interaction in distance education: Recent developments and research questions. In Moore, M. G., & Anderson, W. G. (Eds.), Handbook of distance education.

Atkinson, G. (1991). Kolb's learning style inventory: A practitioner's perspective. Measurement & Evaluation in Counseling & Development, 23(4), 149–161.

Barrit, C., Lewis, D., & Wieseler, W. (1999). CISCO Systems reusable information object strategy version 3.0. Cisco whitepaper. Retrieved on December 29, 2009, from http://www.cisco.com/warp/public/779/ibs/solutions/learning/whitepapers/el_cisco_rio.pdf

Biggs, J. B. (1993). From theory to practice: A cognitive system approach, higher education. Research for Development, 12, 73–85.

Cabero, J. (2006). Bases pedagógicas del e-learning. Revista de Universidad y Sociedad del Conocimiento, 3(1).

Canfield, A. A. (1992). Canfield learning styles inventory manual. Los Angeles, CA: Western Psychological Services.

Carver, C. A., Howard, R. A., & Lane, W. D. (1999). Addressing different learning styles through course hypermedia. IEEE Transactions on Education, 42.

Dunn, R., & Dunn, K. (1999). The complete guide to the learning styles inservice system. Allyn and Bacon.

Duval, E., & Hodgins, W. (2003). A LOM research agenda. In G. Hencsey, B. White, Y. Chen, L. Kovacs, & S. Lawrence (Eds.), Proceedings of the 12th International Conference on World Wide Web, Budapest, Hungary, (pp. 659–667).

EDUCAUSE. (2009). Learning initiative. Retrieved on December 29, 2009, from http://www.educause.edu/

Fernandes, E., Madhour, H., Miniaoui, S., & Forte, M. W. (2005). Phoenix Tool: A support to semantic learning model. Workshop on Applications of Semantic Web Technologies for e-Learning (SWEL@ICALT'05), Kaohsiung, Taiwan, 5–8 July 2005.

Honey, P., & Mumford, A. (1992). The manual of learning styles. Maidenhead, UK: Peter Honey.

Jonassen, D. H. (1993). Thinking technology: Context is everything. Educational Technology, 31(6), 35–37.

Kanninen, E. (2009). Learning styles and e-learning. Master of Science thesis, Master's Degree Programme in Electrical Engineering.

Keefe, J. W. (1986). Learning style profile. National Association of Secondary School Principals.

Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice Hall.

L'Allier, J. J. (1997). Frame of reference: NETg's map to the products, their structure and core beliefs. NETg whitepaper. Retrieved from www.netg.com

Lawrence, G. (1993). People types and tiger stripes: A practical guide to learning styles (3rd ed.). Gainesville, FL: Center for Applications of Psychological Type.

LTM (Learning Type Measurement). (2004). Discover your learning styles graphically. Retrieved on December 29, 2009, from www.learningstylesonline.com

Navy Integrated Learning Environment (Navy ILE). (2009). Introduction. Retrieved on December 29, 2009, from https://ile-help.nko.navy.mil/ile/

O'Loughlin, M. (1992). Rethinking science education: Beyond Piagetian constructivism toward a sociocultural model of teaching and learning. Journal of Research in Science Teaching, 29(8), 791–820. doi:10.1002/tea.3660290805

Pittenger, D. J. (1993). The utility of the Myers-Briggs Type Indicator. Review of Educational Research, 63, 467–488.

Reusable Learning. (2009). Retrieved on December 29, 2009, from http://www.reusablelearning.org

Schluep, S., Bettoni, M., & Guttormsen Schär, S. (2005). Modularization and structured markup for Web-based learning content in an academic environment. In Proceedings of the PROLEARN-iClass Thematic Workshop on Learning Objects in Context, Leuven, Belgium.

Schuell, T. J. (1986). Cognitive conceptions of learning. Review of Educational Research, (Winter), 411–436.

Sharable Content Object Reference Model (SCORM). (2009). Retrieved on December 29, 2009, from http://www.adlnet.org

Slavin, R. E. (1990). Cooperative learning: Theory, research, and practice. Englewood Cliffs, NJ: Prentice Hall.

Yarusso, L. (1992). Constructivism vs. objectivism. Performance and Instruction Journal, April, 7–9.

Zwanenberg, N. V., Wilkinson, L. J., & Anderson, A. (2000). Felder and Silverman's index of learning styles and Honey and Mumford's learning styles questionnaire: How do they compare and do they predict academic performance? Educational Psychology, 20(3), 365–381. doi:10.1080/713663743
KEY TERMS AND DEFINITIONS

E-Learning Contents: Pieces of knowledge, data and method, which are stored, supported and processed on e-learning platforms, and which do not have a strict connection with items used for system management.
E-Learning Course: A way to transfer skills and knowledge through the virtual space in an online system. This implies that students need to be neither in the same physical place nor connected at the same time.
Learning Model: An approach that specifies the learning styles by which students can be classified, with the purpose of adapting the learning-teaching process to their individual features.
Learning Objects: A collection of digital material composed of information objects that have instructional value, can be used in the learning-teaching process and relate to a single learning objective.
Learning Style: An approach or way of learning. Students take in and process information in different ways, and each of these ways is a learning style. It involves educating methods that are presumed to allow an individual to learn best.
Pedagogical Factor: One of a collection of factors that greatly influence the quality of student learning and that teachers should take into account when they design and develop didactic resources or material.
Student Profile: A collection of individual characteristics that determine the way in which a student learns.
Teaching-Learning Process: The process through which students acquire new skills, abilities, knowledge, behaviors, or values as a result of study, experience, training, reasoning and observation.
Section 4
Other Applications of Theory and Method
Chapter 15
Understanding Graduate Students' Intended Use of Distance Education Platforms

María del Carmen Jiménez-Munguía, Universidad de las Américas Puebla, México
Luis Felipe Luna-Reyes, Universidad de las Américas Puebla, México

DOI: 10.4018/978-1-60960-615-2.ch015
ABSTRACT

The objective of this chapter is to use the Unified Theory of Acceptance and Use of Technology to better understand graduate students' intended use of distance education platforms, using as a case the distance education platform of a Mexican university, the SERUDLAP system. Four constructs are hypothesized to play a significant role: performance expectancy, effort expectancy, social influence, and attitude toward using technology; the moderating factors were gender and voluntariness of use. Data for the study were gathered through an online survey with a response rate of about 41%. Results suggest that performance expectancy and attitude toward technology are factors that help us understand graduate students' intended use of a distance education platform. Future research must consider the impact of factors such as previous experience, age, and facilitating conditions in order to better understand students' behavior.
INTRODUCTION

The global economy emerges as a techno-cognitive stage in the development of capitalism (Boisier, 2005). That is to say, globalization is not only an economic phenomenon, but also an informational one (Castells, 2002). This economy is informational because the productivity and competitiveness of
economic units or agents depend on their ability to effectively generate, process and use information and knowledge; and it is global given that the production, distribution and consumption of goods and services are organized on a global scale, fundamentally through networks supported by new interactive technologies (Castells, 2002). In an attempt to recruit and educate qualified people, universities and colleges have joined the globalization process by offering on-line
courses and programs. In recent years, on-line education has expanded both territorially and technologically, mainly due to advances in telecommunications and the increasing demand for continuing education. Administrative and instructional technologies are being developed in the context of distance and online education, creating opportunities for students and professionals to gain competitive advantage by exploiting the benefits of technology (Robinson, 2006). The landscape of distance education is being driven by the growing acceptance and popularity of online course offerings and online degree programs at universities (Eom, Wen, & Ashill, 2006). Kathawala, Abdou and Elmuti (2002) contended that, at the present time, people look for online graduate courses through the Internet because such courses require less individual commitment and do not create family relocation problems. In addition, online education is a flexible way to obtain professional development without leaving one's job, because the information is available at any time and is conducive to just-in-time and just-for-me learning (Cabero, 2006; Herrera-Corona, Mendoza-Zaragoza, & Buenabad-Arias, 2009). In other words, professionals are now choosing distance education to better meet their andragogical needs, such as experiential learning and active engagement in their learning process (Lewis & Price, 2007). Professionals who seek graduate degrees in order to obtain better job opportunities have started to adopt online education in Mexico. They are exploring alternative education formats that better adjust to their learning styles, job needs and temporary geographic location (Barrón, 2004). They look for a solution that satisfies their needs and demands to diversify and make flexible the opportunities to learn, from any place and at any time, in this changing society (Herrera-Corona, et al., 2009). Notable advancements in e-learning technology have included the introduction of learning
management systems (Isodynamic, 2001). Higher education institutions are making more investments in technology tools, but the influence of those tools on students' usage must be considered. The attitude toward acceptance and use of technology must be included when planning and implementing new technologies at higher education institutions (Robinson, 2006). Although there is an important literature on technology acceptance, its application to distance education platforms is scarce. The Technology Acceptance Model (TAM) has had an important influence on research into the acceptance and use of technology, and it has been complemented by a number of behavioral investigations. In an effort to integrate the research on technology acceptance, Venkatesh, Morris, Davis & Davis (2003) conceived the Unified Theory of Acceptance and Use of Technology (UTAUT), compiling and synthesizing studies on technology and human behavior. Moreover, constructs included in the UTAUT model, such as performance expectancy, effort expectancy or facilitating conditions, are closely related to student satisfaction. The objective of this work is to use the UTAUT model to better understand graduate students' intended use of distance education platforms, using as a case the distance education platform of a Mexican university, the SERUDLAP system. The chapter is organized in eight sections, including this introduction. The second section includes an overview of distance education in Mexico as well as a conceptual presentation of technology acceptance models with a focus on the UTAUT model. The third section presents the research model and the hypotheses that guide this study. The fourth section describes the methods used in the research reported in the chapter. The fifth and sixth sections present the main results, as well as a discussion of the main results and some practical recommendations. The last two sections in the chapter point to future research directions and concluding remarks.
BACKGROUND

In this section of the document, we start by providing some contextual information about the history and current status of distance education and e-learning in Mexico. We then continue with a description of the UTAUT model.
Distance Education and E-Learning in Mexico

Although the literature has paid little attention to the phenomenon of distance education in Mexico, it has a tradition of more than 70 years, starting in 1933 when the magazine El Maestro Rural provided the first correspondence courses to teachers in rural areas (Bosco & Barrón, 2008). Later, in 1947, the Ministry of Education founded the Federal Institute for Teacher Training, which became the first formal effort of distance education in Latin America. The model included printed materials sent by mail, combined with radio lessons, face-to-face practices and evaluations in centers located close to the teachers' working location (Enríquez Alvarez et al., 2001). In the case of K-12 education, the example with the longest tradition is the Telesecundaria, which started as a pilot project in 1966 to serve students in grades 7 to 9 in rural areas, becoming a national system in 1967. The Telesecundaria model is based on TV programs supported by printed materials and a local facilitator. The model has been very successful, and has served as a model for several countries in Central America (Enríquez Alvarez, et al., 2001). During the 80's, similar projects started to serve grades 10 to 12 and provide technical and vocational education, as well as adult literacy programs. Also, adult distance education programs have extended to provide compulsory education all through high school (Enríquez Alvarez, et al., 2001). Many of these programs have been developed, as in other developing countries, as a strategy to promote
equal access to educational services and to reduce poverty (Larson & Murray, 2008). In the case of higher education, distance education programs started in 1972, when the National University (UNAM) created the Open University System (Bosco & Barrón, 2008). Currently, this system offers distance education including high school, 20 undergraduate majors and 16 graduate programs. In 2003, higher education institutions in Mexico served about 2.4 million students (around 3% of the total population), and about 155 thousand of them were enrolled in a distance education program, representing about 7% of the total enrollment (Rubio Oca, 2006). With the introduction of the Internet, many Mexican institutions started to offer Internet-based programs, particularly during the late 90's and the early 2000's (Garrido Noguera & Thirión, 2006). According to a survey among members of the National Association of Universities and Higher Education Institutions, 41% of the Mexican colleges and universities have a distance education program, and another 50% are planning to start distance education programs (Ortega Amieva, 2006). Current programs use the Internet and other computer-based systems combined with printed materials, videoconferences, phone calls, TV and radio programs. In fact, this national association of universities has included in its general strategy the promotion of high-quality distance education, and the exchange of knowledge and best practices among higher education institutions in the country. In spite of the efforts described in the previous paragraphs, only a few universities are recognized as having consolidated distance education programs that take advantage of Internet technologies (e-learning); five of them serve around 18% of the student population enrolled in this kind of program, and only 16 Mexican universities are offering e-learning programs (Garrido Noguera & Thirión, 2006). The growth of e-learning programs may be limited by the current access to computers and the Internet in the country. Although computer
penetration has grown from 16.7 percent in 2001 to 36.2 percent in 2009, and Internet penetration from 8% to 28.3% over the same period (INEGI, 2009), access to computers and the Internet is perceived as a main barrier for e-learning (Rubio Oca, 2006). There are as many e-learning models in the country as there are institutions offering this kind of program (Torres Nabel, 2006); more than 50% of the Mexican universities offering e-learning have developed their own technical platform, and about 30% use Blackboard (Ortega Amieva, 2006). Finally, a recent survey among students enrolled in the three main e-learning programs in Mexico revealed that the main reasons for Mexican students to attend this kind of program are the accessibility of materials 24/7, which gives them flexibility to study at any time according to their needs, and the possibility of studying from home or work, which shows their appreciation for managing their time and location (Herrera Corona, Mendoza Zaragoza, & Buenabad Arias, 2009). The same study identifies the difficulty of interacting with other students and faculty in the learning process as one of the main weaknesses of these programs in Mexico.
The UTAUT Model

The TAM was designed to predict the acceptance and usage of technology according to the perceptions of users in their workplace. These perceptions influence users' intentions to use technology, which in turn influence whether or not they will actually use it (Arbaugh & Warell, 2008). Since its initial development, the TAM has been extensively used in different environments to understand technology acceptance. Some technologies explored with TAM have been cellular technology (Je Ho & Park, 2005), on-line banking (Pikkarainen, Pikkarainen, Karjaluoto, & Pahnila, 2004) and the use of wireless Internet (Lu, Yu, Liu, & Yao, 2003). The TAM has also been
used in several studies as a framework to explain usage, satisfaction, and ease of use of technology in learning environments such as the Internet as an educational delivery medium, for instance in the case of WebCT and the CECIL system at the University of Auckland (Arbaugh & Warell, 2008). The Technology Acceptance Model is one among many competing views on users' acceptance of technology. This model, proposed originally by Davis in 1985, was used to develop the Unified Theory of Acceptance and Use of Technology (UTAUT), a high-level model to explore the determinants of the acceptance and use of information technologies by individuals (Venkatesh, et al., 2003). In order to develop UTAUT, eight theoretical frameworks, and the variables within them that influence the behavioral intention to adopt and use technology, were integrated. In the next paragraphs we briefly introduce each of these models. Two constructs, attitude toward behavior and subjective norm, were taken from the Theory of Reasoned Action (TRA) (Fishbein & Ajzen, 1975). The TRA was followed by the TAM of Davis, Bagozzi & Warshaw (1989), in which perceived usefulness and perceived ease of use were considered to predict the acceptance and usage of technology. The Motivational Model (MM) studies behavior by means of the hierarchy of reasons underlying each person's intrinsic and extrinsic motivation. The Theory of Planned Behavior (TPB) extended the TRA by adding the construct of perceived behavioral control. Later on, a hybrid model called the C-TAM-TPB combined the predictors of TPB with perceived usefulness from TAM. The Model of PC Utilization (MPCU) presents a competing perspective to that proposed by TRA and TPB, and included the construct of facilitating conditions. The Innovation Diffusion Theory (IDT), grounded in sociology to study a variety of innovations, was adapted by Moore and Benbasat (1996), who found support for the predictive validity of innovation characteristics, measuring constructs such as
Figure 1. The UTAUT model (Venkatesh, et al., 2003)
relative advantage, image, visibility, compatibility and voluntariness of use. The Social Cognitive Theory (SCT), developed by Bandura (1986), groups three aspects that together define human behavior: the interaction, the dynamics and the reciprocity of an individual. Compeau and Higgins (1995) applied and extended the SCT model to computer utilization and found variables that influence usage, such as performance expectations, self-efficacy, affect and anxiety. Venkatesh and his colleagues (2003) compared the eight theoretical frameworks both conceptually and empirically to develop UTAUT, which outperformed all individual frameworks, explaining 70 percent of the variance in usage intention. The four constructs that were considered direct determinants of user acceptance and usage behavior are performance expectancy, effort expectancy, social influence and facilitating conditions. Each construct was developed from models that have demonstrated robustness in various organizational settings (Robinson, 2006). This model incorporates moderator variables such
as age, gender, prior experience and voluntariness of technology use. According to Venkatesh and his colleagues (2003), the UTAUT explains as much as 70 percent of the variance in the intention to accept and use technology (Figure 1). In this model, performance expectancy refers to the degree to which an individual believes that using the technology will help him/her attain performance gains; effort expectancy is defined as the level of simplicity associated with the use of the system; social influence is the degree to which an individual perceives that important others believe that he/she should use the system; and facilitating conditions are the degree to which an individual believes that an organizational and technical infrastructure exists to support the use of the system. The UTAUT model has been used to investigate the determinants of students' intention to use technology in areas such as marketing (Robinson, 2006), to explore the factors that influence medical teachers' acceptance of information and communication technology (ICT) integration in the classroom (Birch & Irvine, 2009), to
Figure 2. Research model
investigate the determinants of mobile Internet acceptance and to understand whether or not there are gender effects (Hsiu-Yuan & Shwu-Huey, 2010), and to determine the extent to which students used and accepted m-learning as an education delivery method (Williams, 2009). The authors believe that the use of the UTAUT model is therefore suitable for the purpose of the present study.
RESEARCH MODEL AND HYPOTHESES

As we mentioned before, the UTAUT model will be used to investigate graduate students' intended use of the SERUDLAP system. The constructs and the relationships between them can be seen in the research model used in this chapter (see Figure 2). Four constructs are hypothesized to play a significant role as direct determinants of user acceptance and usage behavior: performance expectancy, effort expectancy, social influence and attitude toward using technology. Although
original UTAUT findings showed no significant impact of attitude toward technology, Venkatesh et al. (2003) recommended further exploring the effects of this variable on the behavioral intention to use technology. Moreover, we decided to include it given that enrollment in and use of the distance learning platform are voluntary, so the attitude toward technology may play a role. As suggested by UTAUT, performance and effort expectancy are moderated by gender, and social influence is moderated by gender and voluntariness of use. Actual use behavior is not included in the model in this chapter because we did not have direct access to use statistics on the SERUDLAP server; because of internal institutional policies, these statistics could not be disclosed for research purposes. For the same reason, the construct of facilitating conditions was not included in the final model, because it is only related to actual use behavior. On the other hand, the number of respondents was low (only 112) in comparison to the number of variables included in the UTAUT model, which forced us to
exclude age and experience using computers from the model. We made this decision because of the relatively low variance in these two variables. The study was conducted through an Internet-based survey applied to a sample of graduate students registered in distance education courses using the SERUDLAP system, which uses SharePoint as its technical platform. The design was cross-sectional, since our investigation collected data at a single point in time.
Hypotheses

This study was guided by several propositions made by Venkatesh et al. (2003), including the following hypotheses:

H1: Performance expectancy by the student is positively related to his behavioral intention to use the SERUDLAP system.
H2: Effort expectancy by the student is positively related to his behavioral intention to use the SERUDLAP system.
H3: The social influence is positively related to the student's behavioral intention to use the SERUDLAP system.
H4: The student's attitude toward using technology is directly related to his behavioral intention to use the SERUDLAP system.
H5: The effect of performance expectancy on the behavioral intention to use the SERUDLAP system is moderated by gender.
H6: The effect of effort expectancy on the behavioral intention to use the SERUDLAP system is moderated by gender.
H7: The effect of social influence on the behavioral intention to use the SERUDLAP system is moderated by gender.
H8: The effect of social influence on the behavioral intention to use the SERUDLAP system is moderated by voluntariness of use.
METHODS

The population of this study consisted of students at the Universidad de las Américas Puebla who were enrolled in the online graduate programs offered through the SERUDLAP system during the Spring 2007 semester. The SERUDLAP system is based on a project-based learning philosophy, in which the professor designs a series of tasks, selecting for each of them a basic set of supporting materials. Supporting materials include readings, on-line presentations and other Internet resources. Students work independently or in small teams to develop each task. Assignments, student work and faculty feedback are all managed through a SharePoint-based document repository. The SharePoint template also includes a bulletin board for announcements and a discussion board for asynchronous questions or topical discussions. Each course also involves synchronous conversations via chat to clarify questions or to motivate further research. SERUDLAP offers graduate majors in the areas of business, humanities, social sciences and engineering. Students enrolled in these programs are geographically dispersed across the country. One of the entrance requirements of the program is to have some experience and basic computing skills. Data collection was based on an online survey generated in SurveyMonkey. The tool does not allow duplicate responses, because it keeps a registry of each participant's IP address, and it generates a database that facilitates compiling the responses to the survey. The invitation to answer the survey was sent to the whole population enrolled in any SERUDLAP program (270 students). An invitation letter, which included a brief summary of the purpose of the research and the survey URL, was prepared, and the SERUDLAP director who collaborated in the research sent it to students via electronic mail.
The survey was available for a month and a half, and every two weeks a reminder was sent out to the students in order to increase the response rate. A total of 270 students were enrolled during the period analyzed. The response rate was 44.1% (119 surveys), but only 112 surveys were completely answered (41.5% of the total population) and used in the further analysis of the data. Among the 112 respondents, 58% were male and 42% were female. The survey instrument was based on the work of Venkatesh and his colleagues (2003), who suggested and validated a series of items for each one of the variables included in the model of Figure 1. The general process followed for the development of the survey instrument consisted of three stages. In the first stage, all proposed items of the UTAUT model were translated into Spanish, and some of them were selected and adapted to create an initial survey instrument. A panel of three experts then evaluated this modified version of the UTAUT survey. Finally, the survey was piloted with a group of 30 students at the university, selecting 5 items for each independent variable in the model and using Cronbach's alpha as a basis for the final selection of the items in each scale. As a result of the initial feedback from the Director of SERUDLAP, the items were reviewed and modified to make them still more specific. In the following paragraphs, we include the definitions of each of the variables included in the research model. The items included in each scale, as well as the codes used in the study, are shown in Table 1. Each of the items employed a seven-point Likert-type scale, where 1 meant "strongly disagree" and 7 "strongly agree".

Performance expectancy: This variable measures the degree to which an individual thinks that using the system will help him improve his personal performance at work. In the case of the SERUDLAP system, personal performance is associated with learning performance. From the initial 24 items, only 5 were chosen, codified as PE.

Effort expectancy: This variable measures the users' perception of how easy a given system, in this case the SERUDLAP system, is to use. From 14 possible items, 5 were selected, codified as EE.

Social influence: It measures the way in which the social surroundings of the individual influence his or her decision to use, or not to use, a new technology. It is considered a direct determinant of behavioral intention. Only 5 items were chosen from the proposed 9, codified as SI.

Attitude toward using technology: This variable is defined as an individual's overall affective reaction to using a system; 5 items were chosen to measure it, codified as ATUT.

Behavioral intention: Following the UTAUT proposal, these questions have been adapted to our investigation. From the items included in the UTAUT model, only 1 was used, codified as BI.

Moderating variables: The UTAUT model also includes 4 moderators as determinants of intention and behavior: gender, age, experience and voluntariness of use. As we explained in the previous section, in this investigation we only considered gender and voluntariness of use. Gender moderates the effects of performance expectancy, effort expectancy and social influence, and voluntariness of use only moderates the effect of social influence on the behavioral intention to use the system. The codes used to distinguish these moderating variables are G (gender) and V (voluntariness of use).

Linear regression was employed to examine the determinants of students' acceptance and usage of technology. The following section includes a summary of the main results of the analysis.
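For readers unfamiliar with the reliability statistic used during the pilot, the sketch below shows one common way to compute Cronbach's alpha for a five-item scale. The response matrix is made up for the example; the study's real items and reliabilities are those reported in Tables 1 and 2.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: a respondents x items matrix of Likert scores (e.g., 1-7)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Made-up responses for a five-item scale (rows = respondents, columns = PE1..PE5).
# With random data the resulting alpha is meaningless; the point is the computation.
rng = np.random.default_rng(0)
pe_items = rng.integers(1, 8, size=(112, 5)).astype(float)
print(round(cronbach_alpha(pe_items), 3))
```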
RESULTS

This section of the chapter includes the main results of the research. It starts by showing descriptive statistics and correlations for each of the main
Table 1. Summary of survey items

Performance expectancy
PE1: Using the SERUDLAP system allows me to learn more quickly
PE2: The SERUDLAP system allows me to learn better
PE3: I find the SERUDLAP system useful to learn
PE4: Using the SERUDLAP system makes learning easier
PE5: Using the SERUDLAP system can significantly improve the quality of knowledge

Effort expectancy
EE1: The SERUDLAP system is easy to use
EE2: It is easy to make the SERUDLAP system do what I want it to do
EE3: My interaction with the SERUDLAP system is clear and understandable
EE4: Learning to use the SERUDLAP system is easy for me
EE5: I believe that the interaction with the SERUDLAP system is very flexible

Social influence
SI1: I use the SERUDLAP system because of the proportion of fellow workers who have used it
SI2: People who are important to me think I should use the SERUDLAP system
SI3: The organization where I worked supports my use of the SERUDLAP system
SI4: People who influence my behavior think that I should use the SERUDLAP system
SI5: My manager or supervisor supports me in an important way to use the SERUDLAP system

Attitude toward using technology
ATUT1: I like to work with the SERUDLAP system
ATUT2: Using the SERUDLAP system is fun
ATUT3: Using the SERUDLAP system is pleasant
ATUT4: The SERUDLAP system makes learning more interesting
ATUT5: I quickly become bored when using the SERUDLAP system

Behavioral intention
BI: I would not doubt in attending another graduate program in the SERUDLAP system

Voluntariness of use
V: Because my work schedule, the on-line graduate program in the SERUDLAP system is my best choice
constructs included in the study, as well as basic indicators of the reliability of the scales, and finishes by showing the main results of the regression analysis. In order to measure the internal consistency of the scales, Cronbach's alpha was used (Gay, Mills, & Airasian, 2006); Table 2 contains the alpha values of all scales used in the study. Even though some authors recommend that Cronbach's alpha coefficients should be above 0.70 (Nunnally & Bernstein, 1994), others agree that an alpha above 0.50 reveals an adequate
level of reliability (Hair, Anderson, Tatham, & Black, 1998). Given that the alpha coefficients range from 0.674 to 0.881, it can be said that the scales employed in this study showed good internal consistency. Table 2 also includes descriptive statistics of the main variables included in the study as a way to summarize the survey responses (Gay, et al., 2006). All variables, except social influence, show a mean value above the neutral point of the Likert scale. Voluntariness of use is the variable with the
Table 2. Reliabilities, scale means, standard deviations and correlations

Variable                                   Alpha (a)   Mean    StdD    PE       EE       SI       ATUT     BI
Performance expectancy – PE                .881        4.735   1.160
Effort expectancy – EE                     .761        5.188   1.056   .679**
Social influence – SI                      .674        3.903   1.029   .621**   .452**
Attitude toward using technology – ATUT    .858        4.827   1.174   .741**   .765**   .542**
Behavioral intention – BI                              4.831   1.770   .713**   .595**   .470**   .724**
Voluntariness of use – V                               6.438   1.137   .264**   .262**   .154     .273**   .319**

(a) Only includes scales with more than one item
** Correlation is significant at the 0.01 level (2-tailed)
Table 3. t-values, significance, ß and VIF

Scale                                      t           Sig     ß        VIF
Performance expectancy – PE                2.970**     .004    .392     4.42
Effort expectancy – EE                     0.494       .622    .064     4.29
Social influence – SI                      -1.191      .236    -.126    2.86
Attitude toward using technology – ATUT    3.590***    .001    .419     3.45
Voluntariness of use – V                   2.059*      .042    .149     1.33
Gender – G                                 -0.130      .897    -.008    1.04
G * PE                                     0.189       .850    .024     4.00
G * EE                                     -1.149      .253    -.126    3.05
G * SI                                     1.146       .255    .125     3.02
V * SI                                     0.066       .370    .066     1.35

* Significant at the 0.05 level
** Significant at the 0.01 level
*** Significant at the 0.001 level
highest value, followed by effort expectancy, and social influence is the scale with the lowest value. Finally, Table 2 also includes the bi-variate correlations among the variables in the study. The Pearson correlation coefficient was used, given that it is the most stable measure of correlation (Gay, et al., 2006). All but one of the correlations are significant at the 0.01 level. Finally, regression analysis was conducted to investigate how different factors affect the behavioral intention to use the SERUDLAP system, using Ordinary Least Squares (OLS). Given the values of the correlations among variables, interaction effects were calculated after centering the variables involved in the products. Moreover,
we conducted multicollinearity diagnostics. As shown in Table 3, all VIF values are under 10, which is commonly accepted as the threshold value; values lower than 10 reflect no multicollinearity problems. Performance expectancy, attitude toward using technology and voluntariness of use are statistically significant, while effort expectancy, social influence and gender are statistically non-significant. The overall model was statistically significant (R² = 0.614, F = 15.6, Sig = 0.000). Finally, the analysis shows that there is no significant moderating effect of gender or voluntariness of use on any other variable.
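For readers who want to reproduce this kind of analysis, the sketch below illustrates the general procedure with the statsmodels library and made-up data: the predictors involved in products are mean-centered, the interaction terms are formed, the OLS model is fitted, and VIFs are computed. Variable names follow the codes defined in the Methods section; this is not the authors' actual analysis script.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Made-up scale scores standing in for the survey-based averages (n = 112).
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.uniform(1, 7, size=(112, 5)),
                  columns=["PE", "EE", "SI", "ATUT", "V"])
df["G"] = rng.integers(0, 2, size=112)       # gender dummy
df["BI"] = rng.uniform(1, 7, size=112)       # behavioral intention

# Mean-center the variables involved in products, then build the interaction terms.
for col in ["PE", "EE", "SI", "V", "G"]:
    df[col + "_c"] = df[col] - df[col].mean()
df["G_PE"] = df["G_c"] * df["PE_c"]
df["G_EE"] = df["G_c"] * df["EE_c"]
df["G_SI"] = df["G_c"] * df["SI_c"]
df["V_SI"] = df["V_c"] * df["SI_c"]

predictors = ["PE", "EE", "SI", "ATUT", "V", "G", "G_PE", "G_EE", "G_SI", "V_SI"]
X = sm.add_constant(df[predictors])
model = sm.OLS(df["BI"], X).fit()
print(model.summary())                       # coefficients, t-values, R-squared

# Variance inflation factors for the non-constant columns; values under 10 are
# usually read as showing no serious multicollinearity.
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)
```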
Table 4. Outcomes of hypothesis tests

H1: Performance expectancy by the student is positively related to his behavioral intention to use the SERUDLAP system. (Supported)
H2: Effort expectancy by the student is positively related to his behavioral intention to use the SERUDLAP system. (Not supported)
H3: The social influence is positively related to the student's behavioral intention to use the SERUDLAP system. (Not supported)
H4: The student's attitude toward using technology is directly related to his behavioral intention to use the SERUDLAP system. (Supported)
H5: The effect of performance expectancy on the behavioral intention to use the SERUDLAP system is moderated by gender. (Not supported)
H6: The effect of effort expectancy on the behavioral intention to use the SERUDLAP system is moderated by gender. (Not supported)
H7: The effect of social influence on the behavioral intention to use the SERUDLAP system is moderated by gender. (Not supported)
H8: The effect of social influence on the behavioral intention to use the SERUDLAP system is moderated by voluntariness of use. (Not supported)
DISCUSSION AND RECOMMENDATIONS
Table 4 summarizes the main conclusions with regard to the hypotheses of this study. The first hypothesis (H1) is supported by the data, indicating that students' behavioral intention to use the SERUDLAP system was determined by their performance expectancy; that is, their expectations about learning and knowledge acquisition. The result is consistent with the findings of Duyck and his colleagues (2008), where performance expectancy was salient in predicting the behavioral intention to learn and use new technology. As mentioned by Robinson (2006), students are "likely to evaluate the technology based on how it is going to assist them in accomplishing their educational goals" (p. 86). In this study, students' expectations of learning new professional skills shape the way they accept and use technology. Effort expectancy (H2) is not a determinant of behavioral intention for respondents in this sample. It seems that for these students, the effort needed to complete their degree was not important as long as they could learn and acquire new professional skills. The result is not in line with Birch and Irvine (2009), who found that effort expectancy was the only significant predictor of behavioral intention. Effort expectancy (i.e., how easy the system is to use) was the construct with the highest mean value,
with the only exception of voluntariness of use (see Table 2). Moreover, although voluntariness of use has no moderating effect, it does have a direct positive relationship with the behavioral intention to use the system. Thus, one explanation of this discrepancy could be that, for students who registered voluntarily for e-learning classes because it was the only way they could meet their continuing education needs (voluntariness of use), effort expectancy may take a secondary role. Social influence (H3) is another variable that appears not to be related to behavioral intention in this sample. Again, this result differs from Díaz and Loraas (2010), who found that social influence is important. In contrast, the result parallels Duyck et al. (2008), who found that social influence is not significant even though effort expectancy may determine behavioral intention. A potential explanation for this result is that the students in the sample do not belong to the same organization, whereas the social influence construct was developed for settings involving people from a single organization, reflecting mainly peer influence or support from one's organization or supervisor. The diversity of organizational settings to which students belong may eliminate this effect. On the other hand, e-learning in Mexico is still in its initial
stages. It may be that social influence becomes a determinant once e-learning reaches a higher market penetration. It was found that students' attitude toward using technology has a positive effect on their behavioral intention to use the SERUDLAP system (H4). In line with Robinson (2006), Je Ho and Park (2005), and Duyck, et al. (2008), a person's positive attitude toward technology leads to a favorable intention to use it. If a student does not have a positive attitude toward using technology, his or her behavioral intention to use it decreases (Diaz & Loraas, 2010). This finding differs from Venkatesh, et al. (2003), but is consistent with other theories, such as the TRA, TPB/DTPB, and the MM. The analysis did not find that any of the hypothesized relationships were statistically moderated by gender. Contrary to the findings of Venkatesh et al. (2003), gender was not found to be a moderating force, and H5, H6, and H7 were not supported. This finding is similar to those suggested by Robinson (2006). Similarly, the impact of social influence on behavioral intention is not moderated by voluntariness of use according to the data in this research. However, the regression results show a significant positive, direct relationship between voluntariness of use and behavioral intention, which is consistent with motivations reported in the distance learning literature (Kathawala, et al., 2002). The results presented in this chapter suggest that e-learning managers must take care of the quality of educational materials and processes in their educational programs, given that students enrolling in them have a strong focus on performance and results. Although in this study effort expectancy is not a determinant of the intention to use the e-learning platform, it is also true that in this particular case most students perceived SERUDLAP as an effortless, easy-to-use system. That is to say, a focus on the usability of the system may not be a factor promoting more use, but a lack of attention to this important element may be
a factor reducing the intention to join a particular e-learning program. Additionally, as suggested above, the fact that social influence is not significantly related to the intention to use an e-learning system may be linked to its low market penetration. In the particular case of Mexico, this may also be linked to the low penetration of computers and the Internet. Unfortunately, these facts are not under the direct control of program managers, and they require the design of public policies oriented to promote a digital culture in the country. Considering that attitude toward technology has an impact on the intention to use technology, these policies become even more important. Efforts like the one initiated by the National Association of Universities and Higher Education Institutions have the potential to facilitate the diffusion and use of these technologies in education.
FUTURE RESEARCH DIRECTIONS
The research reported in this chapter constitutes an initial approach to understanding graduate students' acceptance and use of distance education platforms, and further research is needed in this particular area. Administrators of distance education programs may consider these preliminary findings when planning and implementing new technologies at higher education institutions. Future research may consider all of the variables initially included in the UTAUT (such as age, previous experience, facilitating conditions, etc.) in order to better understand students' behavior. In addition, the research reported here suggests that open e-learning programs differ in potentially important ways from the contexts in which technology acceptance has traditionally been studied. For instance, technology acceptance studies are usually associated with systems adopted in a single organization, and students enrolled in programs like the ones
offered in SERUDLAP come from many different organizations. Finally, considering the diversity of distance education models, acceptance of technology and student satisfaction should also be explored involving variables related to learning techniques employed in each program and tools and techniques to create learning communities.
CONCLUSION
The study examined a modified version of the Unified Theory of Acceptance and Use of Technology model and revealed some determinants that affect the behavioral intention to accept and use an online educational system. Performance expectancy, attitude toward technology, and voluntariness of use are factors that help us understand graduate students' acceptance and use of a distance education platform such as the SERUDLAP system. These results suggest that distance education students have high expectations in terms of the skills and knowledge that they will learn in a distance education program, and that difficulties dealing with the system or learning platform are not a key determinant for them. They also appear to have a positive attitude toward technology. The results of this research suggest that, in the case of distance education technologies, voluntariness of use has a direct positive effect on the behavioral intention to use the system rather than a moderating effect on some of the relationships. In this study, effort expectancy and social influence were not determinants of the behavioral intention to accept and use the SERUDLAP system. Given that previous research shows contradictory results, the impacts of these two variables on behavioral intention need more exploration in the future. In this particular study, it makes sense that social influence has no significant impact because students do not belong to the same organization, and thus there is not necessarily social pressure to adopt the system in the same
sense that was defined in UTAUT. On the other hand, effort expectancy may not be an important factor given that our sample only included people who already had adopted the system.
ACKNOWLEDGMENT
The authors thank Jose Guillermo Cervantes and Angel Delgado Cruz for their valuable contributions to the development and administration of the survey.
REFERENCES Arbaugh, J. B., & Warell, S. S. (2009). Distance learning and Web-based instruction in management education. In Armstrong (Ed.), The Sage handbook of management learning, education and development (vol. 7, pp. 231-254). Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice Hall. Barrón, H. S. (2004). La educación en línea en México. Edutec: Revista electrónica de tecnología educativa, 18. Retrieved from http://dialnet. unirioja.es /servlet/articulo? codigo=1064579 Birch, A., & Irvine, V. (2009). Preservice teachers’ acceptance of ICT integration in the classroom: Applying the utaut model. Educational Media International, 46(4), 295–315.. doi:10.1080/09523980903387506 Boisier, S. (2005). ¿Hay espacio para el desarrollo local en la globalización? Revista de la CEPAL, 86, 47-62. Retrieved from http://dialnet.unirioja. es /servlet/articulo? codigo=1257248 Bosco, M. D., & Barrón, H. (2008). La educación a distancia en México: Narrativa de una historia silenciosa. Mexico City, Mexico: UNAM.
Understanding Graduate Students’ Intended Use of Distance Education Platforms
Cabero, J. (2006). Bases pedagógicas del elearning. Revista de Universidad y Sociedad del Conocimiento, 3(1), 1-10. Retrieved from http:// www.uoc.edu/rusc/ 3/1/dt/esp/cabero.pdf Castells, M. (2002). La era de la información: Economía sociedad y cultura. La sociedad red (4 ed.). México: Siglo XXI Ediciones. Compeau, D. R., & Higgins, C. A. (1995). Application of social cognitive theory to training for computer skills. Information Systems Research, 6(2), 118–143. doi:10.1287/isre.6.2.118 Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003. doi:10.1287/ mnsc.35.8.982 Diaz, M. C., & Loraas, T. (2010). Learning new uses of technology while on an audit engagement: Contextualizing general models to advance pragmatic understanding. International Journal of Accounting Information Systems, 11(1), 61–77.. doi:10.1016/j.accinf.2009.05.001
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley. Garrido Noguera, C., & Thirión, J. M. (2006). La educación virtual en México: Universidades y aprendizaje tecnológico. In Garrido Noguera, C. (Ed.), El uso de las tecnologías de comunicación e información en la educación superior. Experiencias internacionales (pp. 97–111). Mexico City, Mexico: ELAC. Gay, L. R., Mills, G. E., & Airasian, P. (2006). Educational research: Competencies for analysis and applications (8th ed.). Upper Saddle River, NJ: Pearson Prentice Hall. Hair, J. F. J., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate data analysis. New Jersey: Prentice Hall. Herrera Corona, L., Mendoza Zaragoza, N. E., & Buenabad Arias, M. A. (2009). Educación a distancia: Una perspectiva emocional e interpersonal. Apertura, 9(10), 62–77.
Duyck, P., Pynoo, B., Devolder, P., Voet, T., Adang, L., & Vercruysse, J. (2008). User acceptance of a picture archiving and communication system. Applying the unified theory of acceptance and use of technology in a radiological setting. Methods of Information in Medicine, 47(2), 149–156.
Hsiu-Yuan, W., & Shwu-Huey, W. (2010). User acceptance of mobile internet based on the unified theory of acceptance and use of technology: Investigating the determinants and gender differences. Social Behavior & Personality: An International Journal, 38(3), 415–426. doi:10.2224/ sbp.2010.38.3.415
Enríquez Alvarez, A., Cortés Hernández, A. O., Ortiz Boza, A., Zavala Hernández, C., Gallardo Vallejo, C., & Bernal López, E. (2001). Diagnóstico de la educación superior a distancia. Mexico City: ANUIES.
INEGI. (2009). Usuarios de tecnologías de información, 2001 a 2009. Retrieved July 15, 2010, from http://www.inegi.org.mx/est/contenidos/espanol/soc/sis/sisept/default.aspx?t=tinf204&s=est&c=5577
Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students’ perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4(2), 215–235. doi:10.1111/j.1540-4609.2006.00114.x
Isodynamic. (2001). E-learning. Retrieved October 13, 2009, from http://www.isodynamic.com/web/pdf/IsoDynamic_elearning_white_paper.pdf
Je Ho, C., & Park, M.-C. (2005). Mobile internet acceptance in Korea. Internet Research, 15(2), 125–140..doi:10.1108/10662240510590324 Kathawala, Y., Abdou, K., & Elmuti, D. S. (2002). The global MBA: A comparative assessment for its future. Journal of European Industrial Training, 26(1), 14–23..doi:10.1108/03090590210415867 Larson, R. C., & Murray, M. (2008). Distance learning as a tool for poverty reduction and economic development: A focus on China and Mexico. Journal of Science Education and Technology, 17(2), 175–196. doi:10.1007/s10956-007-9059-1 Lewis, P. A., & Price, S. (2007). Distance education and the integration of e-learning in a graduate program. Journal of Continuing Education in Nursing, 38(3), 139–143. Lu, J., Yu, C.-S., Liu, C., & Yao, J. E. (2003). Technology acceptance model for wireless Internet. Internet Research, 13(3), 206–222. doi:10.1108/10662240310478222 Moore, G. C., & Benbasat, I. (1996). Integrating diffusion of innovations and theory of reasoned action models to predict utilization of information technology by end-users. In Kautz, K., & Pries-Hege, J. (Eds.), Diffusion and adoption of information technology (pp. 132–146). London, UK: Chapman and Hall. Nunnally, J. C., & Bernstein, I. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw Hill. Ortega Amieva, D. C. (2006). La asociación nacional de universidades e instituciones de educación superior y el uso de las tic en educación. In Garrido Noguera, C. (Ed.), El uso de las tecnologías de comunicación e información en la educación superior. Experiencias internacionales (pp. 73–82). Mexico City, Mexico: ELAC.
Pikkarainen, T., Pikkarainen, K., Karjaluoto, H., & Pahnila, S. (2004). Consumer acceptance of online banking: An extension of the technology acceptance model. Internet Research, 14(3), 224–235..doi:10.1108/10662240410542652 Robinson, J. L. (2006). Moving beyond adoption: Exploring the determinants of student intention to use technology. Marketing Education Review, 16(2), 79–88. Rubio Oca, J. (2006). La educación superior y la sociedad de la información en méxico. In Garrido Noguera, C. (Ed.), El uso de las tecnologías de comunicación e información en la educación superior. Experiencias internacionales (pp. 10–21). Mexico City, Mexico: ELAC. Torres Nabel, L. C. (2006). La educación a distancia en México: ¿Quién y cómo la hace? Apertura, 6(4), 74–89. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. Management Information Systems Quarterly, 27(3), 425–478. Williams, P. W. (2009). Assessing mobile learning effectiveness and acceptance. ProQuest Information & Learning, 69. Retrieved from http://search. ebscohost.com /login.aspx?direct =true&db=psyh &AN=2009-99090- 398&loginpage= CustLogin. asp?custid =s5776608&site= ehost-live
ADDITIONAL READING Ackerman, A. (2008). Blended Learning Ingredients: A Cooking Metaphor. Journal of Instruction Delivery Systems, 22(3), 21–25. Ackerman, A. (2008). Hybrid Learning in Higher Education: Engagement Strategies. College & University Media Review, 14(1), 145–158.
Amin, H. (2009). An analysis of online banking usage intentions: an extension of the technology acceptance model. International Journal of Business & Society, 10(1), 27–40. Artino, A. (2008). Motivational beliefs and perceptions of instructional quality: predicting satisfaction with online training. Journal of Computer Assisted Learning, 24(3), 260–270.. doi:10.1111/j.1365-2729.2007.00258.x Clouse, S., & Evans, G. (2003). Graduate Business Students Performance with Synchronous and Asynchronous Interaction e-Learning Methods. Decision Sciences Journal of Innovative Education, 1(2), 181–202..doi:10.1111/j.15404609.2003.00017.x Daneshgar, F., Toorn, C., & Abedin, B. (2009). A Research Theme for Investigating the Effectiveness of Collaborative e-Learning in Higher Education. [Retrieved from Education Research Complete database]. International Journal of Learning, 16(3), 373–383. Downes, S. 2006. E-learning 2.0. Retrieved October 13, 2009, from http://elearnmag.org /subpage. cfm?section= articles&article=29-1 Fusilier, M., Durlabhji, S., & Cucchi, A. (2008). An Investigation of the Integrated Model of User Technology Acceptance: Internet User Samples in Four Countries. Journal of Educational Computing Research, 38(2), 155–182. doi:10.2190/ EC.38.2.c Gallien, T., & Oomen-Early, J. (2008). Personalized Versus Collective Instructor Feedback in the Online Courseroom: Does Type of Feedback Affect Student Satisfaction, Academic Performance and Perceived Connectedness With the Instructor? International Journal on E-Learning, 7(3), 463–476.
Goldstein, J., & Puntambekar, S. (2004). The Brink of Change: Gender in Technology-Rich Collaborative Learning Environments. Journal of Science Education and Technology, 13(4), 505–522..doi:10.1007/s10956-004-1471-1 Isodynamic. (2001). E-Learning. Retrieved October 13, 2009, from http://www.isodynamic.com/ web/pdf/IsoDynamic_ elearning_white_paper.pdf Jerónimo, J. A. (2004). La educación a distancia en dos continentes. Retrieved October 13, 2009, from http://www.ateneonline.net /datos/58_04_Montes_ Jose_Antonio.pdf Lundvall, B.-A. (2002). The University in the Learning Economy. DRUID, 2. Retrieved October 13, 2009, from http://www.druid.dk/wp / pdf_files/02-06.pdf MacDonald, C., & Thompson, T. (2005). Structure, Content, Delivery, Service, and Outcomes: Quality e-Learning in higher education. International Review of Research in Open and Distance Learning, 6(2), 1–21. Matejka, D. (2004). Project-Based Learning in Online Postgraduate Education. Issues in Informing Science & Information Technology, 1, 489–496. Mckinnon, D., Nolan, P., & Sinclair, K. (2000). A longitudinal study of student attitudes toward computers: Resolving an attitude decay paradox. Journal of Research on Computing in Education, 32, 325–335. Prasolova-Førland, E. (2006). Distance learning: overview and design issues. IDI. Retrieved October 13, 2009, from http://www.idi.ntnu.no / emner/dif8914/essays /ekaterina-essay2000.pdf Price, B. (2009). Managing and Assessing Projects and Dissertations at a Distance with a Modified e-Learning Infrastructure. Proceedings of the International Conference on e-Learning, 427-431.
Thomas, P. (2009). Information systems success and technology acceptance within a government organization. Dissertation Abstracts International Section A, 70, Retrieved from PsycINFO database. Van Raaij, E. M., & Schepers, J. J. L. (2008). The Acceptance and Use of a Virtual Learning Environment in China. Computers & Education, 50(3), 838–852. doi:10.1016/j.compedu.2006.09.001 Velazquez, C. (2007). Testing Predictive Models of Technology Integration in Mexico and the United States. Computers in the Schools, 24(3/4), 153–173. doi:.doi:10.1300/J025v24n03-11 Venkatesh, V., & Xiaojun, Z. (2010). Unified Theory of Acceptance and Use of Technology: U.S. Vs. China. [Retrieved from Computers & Applied Sciences Complete database.]. Journal of Global Information Technology Management, 13(1), 5–27. Wang, Y.-S., Wu, M.-C., & Wang, H.-Y. (2009). Investigating the Determinants and Age and Gender Differences in the Acceptance of Mobile Learning. British Journal of Educational Technology, 40(1), 92–118. doi:10.1111/j.1467-8535.2007.00809.x Winter, J., Cotton, D., Gavin, J., & Yorke, J. (2010). Effective e-learning? Multi-tasking, distractions and boundary management by graduate students in an online environment. ALT-J: Research in Learning Technology, 18(1), 71–83.. doi:10.1080/09687761003657598
KEY TERMS AND DEFINITIONS
Attitude Toward Using Technology: An individual's overall affective reaction to using a system.
Behavioral Intention: The degree to which an individual wishes to use a system.
E-Learning: Education delivered through electronic technology, from computers to Internet applications.
Effort Expectancy: The level of simplicity associated with the use of the system.
Performance Expectancy: The degree to which an individual believes that using the technology will help him or her attain gains in performance.
Social Influence: The extent to which an individual perceives that important others believe that he or she should use the system.
Voluntariness of Use: The level to which an individual would like to use the system.
ENDNOTE 1
Original survey was carried out in Spanish, and is available upon request.
Chapter 16
Online Project-Based Learning: Students' Views, Concerns and Suggestions
Erman Yukselturk, Middle East Technical University, Turkey
Meltem Huri Baturay, Kırıkkale University, Turkey
ABSTRACT
This study integrated project-based learning (PBL) in an online environment and aimed to investigate critical issues, dynamics, and challenges related to PBL from the perspectives of 49 students in an online course. The effect of PBL was examined qualitatively through an open-ended questionnaire, observations, and the submissions of students taking an online certificate course. According to the findings, students thought that an online PBL course supports their professional development by providing practical knowledge, enhanced project development skills, self-confidence, and research capability. This support is further augmented by the facilities of the online learning environment. Students mainly preferred team work over individual work. Although students were mostly satisfied with the course, they still had some suggestions for prospective students and instructors. The findings are particularly important for those who are planning to organize courses or activities that involve online PBL and for those who are about to take an online or face-to-face PBL course.
INTRODUCTION
Learning is enhanced when students are actively involved in it; when assignments reflect real-life contexts and experiences; and when critical thinking or deep learning is promoted through reflective and applied activities (Smart & Cappel, 2006). Online courses have the potential to create
DOI: 10.4018/978-1-60960-615-2.ch016
environments where students have the advantage of learning by doing in real-life contexts. Even though online learning has the potential to offer flexible and individualized multimedia supported real-life contexts, without a learning strategy its effect on learning would be just a media enriched touch. Implying the importance of pedagogical input into the design and delivery of online learning, Jasinski (1998) stated that online technologies in the form of instructional medium would not
in themselves improve or cause change in learning. Online learning environments offer many capabilities and opportunities for teachers and learners; however, what truly improves learning is well-designed instruction. Therefore, how learning activities are rendered via technology is significant; learning activities should be the focus when designing and analyzing learning through various technologies (Tolsby, Nyvang & Dirckinck-Holmfeld, 2002). The general tendency is to implement online learning with traditional approaches. In their analysis of 436 randomly chosen educational websites, Mioduser et al. (2000) found that most of these online courses reflected traditional approaches commonly found in textbooks and CD-ROM multimedia. In fact, it is the instructor's responsibility to create a learning environment which will allow students to construct knowledge by interacting with their environments (Hill, 1997). Whether in a traditional or an online learning environment, recent recommendations about education reform imply a new concept of teaching and learning that heeds the student's centrality, autonomy, and awareness in the learning process (Karaman & Celik, 2008). Based on constructivism, project-based learning (hereafter PBL) is one of those innovative approaches that provide students with the opportunity to work autonomously and collaboratively in realistic contexts while they are designing a project, solving problems, or making decisions. PBL has been used in various settings: in traditional classrooms, online courses, K-12, and higher education. Despite the popularity of PBL, further research is needed to sufficiently understand how the PBL method guides learners in constructing collaborative projects within the online learning community (Lou & MacGregor, 2004; Rooij, 2009; Thomas & MacGregor, 2005). Therefore, this study provides descriptions of the learning dynamics within an online course, and investigates how students explore ways to use web-based tools that help them communicate and
construct projects over time. The main objective is to investigate the participants' learning experiences with the project method using web-based collaborative technologies. Such research is expected to contribute to the improvement of pedagogy in the design and use of project-based learning in online courses.
BACKGROUND
Project-Based Learning
Project-based learning has lately proved to be an effective teaching and learning strategy that is being used more and more often in classrooms. The theoretical foundations of PBL were laid early in the last century by Dewey's (1938) experiential learning, which emphasizes the importance of practical experience in learning. Through PBL, learners learn concepts, interpret them, and construct meaning while interacting with others and with their surroundings. It engages students in finding solutions to challenging real-life questions or problems that entail making predictions, solving problems, making decisions, and analyzing data through a process of investigation and collaboration, instead of solving well-defined problems prescribed by the teacher (Krajcik et al., 1999; Doppelt, 2005). In their study exploring prospective teachers' and their students' perceptions of a PBL intervention, Karaman and Celik (2008) found that besides gaining life-long learning skills and knowledge, students felt that they could overcome challenges confronted in their real lives after dealing with tiresome, stressful projects. PBL helps learners acquire knowledge and develop long-term learning ability, intellectual abilities, interpersonal skills, and professionalism (Frank et al., 2003; Chartier & Gibson, 2007; Wang et al., 2005). Underpinned by the constructivist learning approach, and owing to its focus on collaboration for problem solving, this model of learning is often confused with problem-based
learning. Project-based and problem-based learning are both based on self-direction and collaboration, and both have a multidisciplinary orientation. However, there are differences between them: (a) duration: project tasks take a longer period of time than problem-based learning problems, with durations ranging from a single session up to weeks or a whole year; (b) project work is more directed to the application of knowledge, whereas in problem-based learning there is no previously received formal instruction, so it is more directed to the acquisition of knowledge (Perrenet, Bouhuijs, & Smits, 2000; Rooij, 2009); and (c) project-based learning adopts a production model, whereas problem-based learning adopts an inquiry model, which regards one problem as its driving force (Lee & Tsai, 2004). Within PBL, learning is organized around projects. A project must meet the following five criteria to be considered a product of PBL and to maximize students' orientation toward learning and mastery:
• Centrality: The projects are central, not peripheral, to the curriculum.
• A driving question: The projects are focused on questions or problems that make students struggle with the central concepts and principles of a discipline.
• Constructive investigations: The central activities of the project must involve transformation and construction of knowledge on the student's side.
• Autonomy: PBL projects, not being teacher-led, incorporate more student autonomy, choice, unsupervised work time, and responsibility than traditional instruction and traditional projects.
• Realism: The projects expose the students to authentic (not simulated) problems and questions, while the solutions or the end products have the potential to be implemented (Thomas, 2000: 3-4).
Moreover, projects should focus on the application, and possibly on the integration of previously acquired knowledge. The projects might be carried out as individuals or in small groups in which students work with a project team of teachers who are advisers and consultants (Mills & Treagust, 2003). In PBL, learning-appropriate goals should be provided to create a need for students and to understand the how and why of a project; besides, the projects should provide frequent opportunities for formative assessment (Barron et al., 1998). Moreover, a master-apprentice kind of relationship is suggested to be used for the teaching-learning situation in which teachers should scaffold instruction by breaking down tasks; use modeling, prompting, and coaching to teach strategies for thinking and problem solving; and gradually release responsibility to the learner for meaningful learning of them through student-directed investigation (Blumenfeld et al., 1991).
Online Project-Based Learning
There has been growing enthusiasm for approaches to instruction that emphasize the connection of knowledge to the contexts of its application (Barron et al., 1998), such as PBL. Currently, this enthusiasm has been renewed by the use of information technologies to foster active and student-centered learning in real-world settings (Land & Greene, 2000; Barak & Dori, 2004). Computer-supported project-based science makes the environment more authentic to students because the computer provides access to large amounts of data and information, expands interaction and collaboration with others via networks, promotes laboratory investigation, and offers tools that experts use to produce artifacts (Krajcik et al., 1994). In particular, the Internet today supplies a useful platform for implementing cooperative learning and enables learners to overcome the limits of classroom space and teaching time (Lee & Tsai, 2004). Thus, small groups of students can easily
collaborate on a project in the online learning environment (Lou & MacGregor, 2004). However, there is very little literature that provides practical examples of how fully online courses can be structured based on PBL, which interpersonal dynamics exist in the distance education environment, and what students' points of view are regarding the obstacles they face (Hall, 2007; Land & Greene, 2000). Some of the few existing studies have indicated that students engaged in project-based online learning (a) generated and applied knowledge (Rooij, 2009), (b) gained deeper content knowledge and higher-level problem-solving skills through held discussions (Wang, Pool, Harris & Wangemann, 2001), and (c) acquired special skills, including an understanding of human dynamics across functional and cultural boundaries (Duarte & Tennant Snyder, 2001). To summarize, this study investigated critical issues, dynamics, and challenges related to project-based learning from the student perspective in an online course. Four main research questions guided this investigation:
• What are the students' views about an online PBL course?
• What are the students' views about PBL?
• What are the students' views about team and individual work in an online PBL course?
• What are the students' suggestions for future students and instructors of an online PBL course?
METHOD OF THE STUDY
This study is qualitative in nature. As defined by Marshall and Rossman (1999), qualitative research is concerned with the complexity of social interactions as expressed in daily life and with the meanings the participants themselves attribute to these interactions. The present study, thus, is pragmatic,
interpretive and grounded in the experiences of project-based students in an online PBL environment.
Participants The participants of this study were chosen from an online project based course which was offered in the Online Information Technologies Certificate Program, Middle East Technical University, Ankara, Turkey (April 2009 - July 2009). Originally, 69 students were registered to the course; however this study included 49 students who volunteered to participate in the study. The number of male students (63.4%) was greater than the number of female students (36.6%), and the students’ ages ranged from 19 to 45 with an average of 27.1 years. 54.8% of the online course students were undergraduate or graduate students. More than half of the students (60.4%) did not have full-time jobs.
Context of the Study
The Online Information Technologies Certificate Program (ITCP) is one of the first Internet-based education projects of the Middle East Technical University in Ankara, Turkey. It includes eight fundamental courses of the Computer Engineering Department. The program is based on synchronous and asynchronous communication methods over the Internet and comprises four semesters lasting nine months in total. The courses in the program are Computer Systems and Structures, Introduction to Computer Programming with C, Data Structure and Algorithms with C, Operating Systems with Unix, Software Engineering, Database Management Systems, Web Programming, and Software Development Project. The main aim of this online program is to train participants in the IT field. Furthermore, it provides opportunities for people who would like to improve themselves in advanced IT areas and desire to make progress in their existing careers. The program, based on the Moodle LMS, provides online lecture notes,
Table 1. The timetable of project phases. (The original table marks, for each course requirement (preparing homepages, project proposal, analysis, design, implementation, test, and presentation), the weeks of the 12-week schedule in which it is prepared.)
learning activities and visual aids. One instructor and two assistants are assigned for each course (Isler, 1997). The Software Development Project is one of the courses of this online program. Students take it at the end of their program to apply their theoretical knowledge into practical problems. The main aim of this course is to develop a software project. This project can be developed alone or in groups of up to four members. Software development projects in this online course are divided into the following phases: project proposal, analysis, design, implementation, and test. These phases consist of requirements that students need to prepare during the course.
• Project proposal: This phase consists of six parts: the project's aim, definition, scope, methods and software tools that will be used, milestones, plan, and project calendar.
• Analysis: This phase consists of components that answer the 'what' questions in the project: the project description, requirements analysis, and architecture context diagram.
• Design: This phase consists of components that answer the 'how' questions in the project: scope of the project, system design, data design, collaboration diagram, class diagram, and interface design.
• Implementation: This phase consists of four parts: the problems that students have faced and their solutions, software components and their functions, screenshots, and a user guide.
• Test: This phase consists of the test plan, the software parts that students have tested and the procedures they have used, and the test results.
• Presentation: In this last phase, students present their projects to the course instructors and their fellow students in a face-to-face session at the end of the course.
Each project phase is prepared by the students on time. The timetable of the project phases, shown in Table 1, is provided on the course web site. Students are evaluated based on their project phases, and the weight of each phase in their course grade is given in Table 2. Students must also complete each project phase to pass the course.
Procedure
The course, Software Development Project, was offered to the participants in the last semester of the online certificate program. Before this course, students had attended several courses related to programming languages, database management systems, and operating systems in previous semesters.
Table 2. Evaluation criteria in the course

Criterion | Percentage (out of 100)
Project proposal | 5
Analysis | 25
Design | 25
Implementation | 25
Test | 10
Presentation | 10
In this online course, students learn the processes of developing software project from the proposal phase to the presentation. At the beginning of the semester, there was a on-ground face-to-face session at the campus of the university. During the face-to-face session lasting two hours, students met with their classmates and instructors. The instructors explained the course structure and requirements thoroughly. The possible project topics and some projects that had been prepared previous year were discussed in this face-to-face course. Moreover, the course instructors mentioned major project topics that might be prepared: dynamic web based applications, database applications, game and simulations, e-business, and expert systems. The instructor announced that at least one programming language (e.g. Java, VBasic, PHP, C#), and database application (e.g. my SQL, MS Access) should be used while preparing projects. Students were also expected to prepare an interface (e.g. a web page) or executable interface to present their projects. After the first face-to-face session students attended this course online about three months. At the first week of the course, the instructors created web accounts that could be used for the students while preparing their projects. As a first requirement of the course, students had to prepare their simple home pages that consisted of their personal information and interests and then upload them to their web accounts to share with their classmates. During the online course, instructors used e-mail for individual communication, dis-
cussion forum for asynchronous communication and chat sessions for synchronous communication to interact with their students. With the help of discussion forum, students were asked to write their possible project topics that they wanted to prepare. These projects could be developed alone or in groups of up to four members in this online course. Group works were recommended to the students but it was not compulsory. 48 students decided to form 19 different groups with two, three or four members to prepare their project and 21 students decided to prepare their projects individually. 40 different projects were proposed totally in the course. After deciding on the project topics and forming groups or not, students prepared their project phases regularly until the end of the course: project proposal, analysis, design, implementation, and test. Before each phase, synchronous chat sessions were conducted among students and instructors to give more information about phases’ documentations. Moreover, after each phase students’ works were evaluated and they were each given feedback individually. The discussion forum was used mainly to guide the students and also students were able to ask specific questions to the instructors via e-mail until the end of the course. At the end of the semester, students came to the campus of the university to present their projects to the instructors and their classmates. To pass this course, students completed each project phase and presented their projects.
Data Collection and Analysis The qualitative paradigm was used for data collection and analysis process in the current study. Pratt (2009) stated that qualitative research is great for addressing “how” questions—rather than “how many” and particularly for understanding the world from the perspective of those studied. A case study approach, which has been applied in the current study, is advantageous when “why” and “how” questions are being asked and it is
recommended when the investigator believes that the contextual conditions are highly relevant to the phenomenon under study (Yin, 1994). To clarify, the case study method is useful for understanding a particular situation or course in depth, such as the online PBL course in this study. The sample consisted of students who attended the online PBL course; therefore, convenience sampling was used. Pratt (2009) stated that one should be very clear about one's "position in the field": the relationship between the researcher and the researched. In this study, one researcher who observed the students was the instructor of the online PBL course. In addition, two researchers worked as 'multiple coders' and analyzed the collected data individually for inter-rater reliability, as suggested by Pratt (2009). Data were collected through a questionnaire, observations, and students' submissions. The researchers prepared 11 open-ended questions for the questionnaire. The course instructors and an expert from the instructional technology department revised the questions and checked them one by one for validity. At the end of the course, the questionnaire was sent to the students who had prepared their projects as group work or individually. The major questions in the questionnaire are indicated in Appendix A. Also, students' contributions to their projects were analyzed, and the students were observed while attending the course discussions and presentations. The structure of the course was not changed, and the researchers did not affect the students during the study. After the data were collected, the data analysis process started, based on the qualitative paradigm (Marshall & Rossman, 1999) and including iterative cycles of examining patterns and ideas. Then, the researchers explored similarities and differences among the students' views in the collected data. Later, the general themes were identified, and the researchers searched for confirming and disconfirming evidence about these themes to be incorporated into the conclusions. Therefore, the data are presented
in the form of tables and as quotations in the body of the paper; that is, the data are presented in the form of 'power quotes' and 'proof quotes', as explained by Pratt (2009). The qualitative responses were analyzed by applying data reduction, data display, and conclusion drawing/verification phases to the transcribed text, based on content analysis. After the content analysis, the themes obtained from the questionnaire responses were subjected to frequency analysis, presented in the form of tables supported with quotations. While drawing conclusions, the observations and each student's contribution to the projects were reviewed for data verification.
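To make the coding and frequency steps concrete, the following is a minimal illustrative sketch, not the authors' actual procedure or data. It assumes two hypothetical coders assigning made-up theme labels to the same responses, computes Cohen's kappa (one common agreement statistic for two coders; the chapter does not name the statistic it used), and tallies theme frequencies of the kind reported in Tables 3 to 6.

```python
from collections import Counter

# Hypothetical codes assigned by two independent coders to the same ten responses.
coder_a = ["flexibility", "forum", "flexibility", "workload", "forum",
           "flexibility", "visuals", "workload", "forum", "flexibility"]
coder_b = ["flexibility", "forum", "forum", "workload", "forum",
           "flexibility", "visuals", "workload", "forum", "flexibility"]

def cohen_kappa(a, b):
    """Cohen's kappa for two coders assigning nominal codes to the same items."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n               # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))  # chance agreement
    return (po - pe) / (1 - pe)

print(f"Cohen's kappa: {cohen_kappa(coder_a, coder_b):.2f}")

# Frequency analysis of the agreed theme codes, as tabulated in the findings.
agreed = [x for x, y in zip(coder_a, coder_b) if x == y]
for theme, freq in Counter(agreed).most_common():
    print(theme, freq)
```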
Findings
Results derived from the written responses were analyzed and categorized into four different groups of themes.
Students' Views about the Online PBL Course
Students' views about the online PBL course gathered around two main themes with sub-themes, as indicated in Table 3. Most of the students were satisfied with the online course and thought that the course was beneficial for them in many ways. An overwhelming majority of these students' responses to the questions pointed out time and place flexibility and chat and forum availability as the strengths of the online course. One student reported, "Its online availability enabled me to participate in the course which I normally wouldn't due to its physical distance and due to the time adequateness I have." Similarly, another learner commented, "… with the online course, it is a great advantage for intensive workers to study not at a specific time but within the required time…" The students also mostly stated that the chat and forum availability of the course contributed greatly to their learning. Emphasizing the importance of the forum, a student said, "At anytime one could ask
Table 3. Students' views about the online PBL course

Theme | Frequency
Strengths of the online PBL course
  Time and place flexibility | 27
  Chat and forum availability (mainly forum) | 23
  Course design and management | 15
Deficiencies of the online PBL course
  Inadequateness of visual aids and images in content | 11
  Inadequateness of examples and practice in content | 11
  Lack of resources | 9
  Lack of face-to-face interaction throughout the semester | 8
  Requirements of the course (too much demanding work) | 6
  Inefficient management of chat sessions and dedication of inappropriate time for chats | 5

Table 4. Students' views about project-based learning

Theme | Frequency
Students' acquisitions from PBL
  Gained practical knowledge for business/real/IT life | 35
  More effective learning through searching for knowledge | 22
  Enhanced project development and management | 19
  More retained knowledge | 12
  Enhanced self-confidence in software project development | 10
  Improved research capability | 8
Problems that students faced while preparing their projects
  Time management problems | 37
  Low entry-level knowledge on project requirements | 27
  Team-based problems (inadequate interaction, less active members, organization problems) | 13
  Choosing of over-extended project topics | 6
Choosing of over extended project topics
6
Students’ Views about Project Based Learning Students’ responses regarding PBL gathered around students’ acquisitions and problems while preparing their projects as indicated in Table 4. The first theme composed of well-known properties of PBL, such as, gained practical knowledge, project development management and effective learning through searching. The second theme composed of several problems that students faced particularly time management, low entry level knowledge and team-based problems. Students felt that PBL practice helped them to better understand course items they previously learnt and to be aware of their capabilities in software project development, to practice what they learned and to produce something, a product, at the end of their projects. Clarifying these views, a student stated that “This project-based course was beneficial to practice our learning gains and produce something.” Students particularly indicated that PBL enabled them to learn effectively while searching for their project topics. Regarding
the course one student reported that “The advantage of the PBL based course was that while working on projects in order to find solutions to problems we had to do more research and so we learned more effectively.” In addition, as students reported, they developed their project management skills as well while they were developing their projects. One student explained “A project-based course is a practice for the students who will take part in business life. Not only at software development but in every sector, we will face with project concept…In short, this course was the most beneficial one through this certificate program.” Similarly, another student pointed out that, “Besides developing a product in this course, we experienced which steps are followed in real project development… After completing the course I will try to work in the field of IT which I have always followed and with the project development experience I have had, I will try to be in the field of IT world and not be a spectator anymore…” Besides all, PBL contributed to students’ selfconfidence in software project development. One student reported that “I had the self confidence that I could do it and after all I can participate in software projects more confidently.” Another student talked about how they improved their research capability “The capability which I did not expect to improve was doing search on the Internet and I understood that I could learn everything by searching.” The students, additionally, talked about the problems they faced in the project development process. An overwhelming number of these students complained about time management problem mostly emanated from their intensive work load due to being a student or working. Regarding this problem, one student explained that “The biggest problem of us was of course time constraint, some of our friends were attending higher education in the meantime and some others were struggling with their intensive work load in their business life. We did not allocate adequate
time to the projects. If we had had more time, we would have developed a better project.” Another major problem that students faced was low entry level knowledge on project requirements. One student stated that “We had to learn brand new things while doing our projects, for instance no one had ever used PHP language before. Although this situation is very informative, it put extra pressure on us and caused us to move slowly, sometimes we submitted the required forms late.” Moreover, choosing an extended project topic, interaction problems in teamwork or being alone at individual work hindered students’ completion of their projects in due time. One student complained about team work and interaction problem and said that, “…we have suffered from team work. Because I could not see my friend so often there were times I had tried to finish his parts that my friend had to do.”
Students’ Views about Team and Individual Work in the Online PBL Course Students had both positive and negative views regarding working on projects in teams and individually. Their views were grouped into several sub-themes as indicated in Table 5. When students were asked if you were to take this course again, would you prefer an individual work or team work, the most of them responded that they would choose team work next time (Students who preferred team work was 25, individual work was 14 in number at a future project). In addition, as they commented, an efficient team work necessitates dividing work load effectively in team members, benefitting from members’ expertise at different issues and regular interaction among members. Most of the students who preferred team work wanted to have the advantage of less work with divided workload and various expertises or point of views of other members which would facilitate
Table 5. Students' views about team and individual work in the online PBL course

Theme | Frequency
Characteristics of an efficient team work
  Division of workload | 17
  Benefiting from different expertise | 12
  Regular online and face-to-face interaction | 10
  Working with limited number of members | 2
Motives to prefer team work
  Less work with divided workload | 19
  Need of various expertise, experiences, and points of view | 19
  Share knowledge and learn from peers | 16
  To work with familiar peers | 14
  To produce a better and comprehensive project | 10
  Less time dedicated to project | 6
Types of labor division in team work
  Shared alike | 16
  Some had more workload | 14
  Studied together | 9
Motives to prefer individual work
  Eagerness for self-study | 16
  Probable low interaction among team members | 14
  Loss of time and organization problems | 12
  Lack of familiar and close members | 10
  Not allocated time for team work due to being too busy | 6
their project work and broaden their minds through project development process and enable them to learn from their peers. One student stated that “Team work was advantageous in that it reduced our work load in the project to reach a better result… I think team work positively affected us since it enabled sharing of knowledge in team members each of whom had different know-how.” Another student reported that, “The most important advantage of team work is to have friends with different expertise. I learned many things from my friends; at some occasions I, as well, transferred my experiences and ideas to them.” Another student explained why they preferred
teamwork, “Because we believed that we could implement an extensive and challenging project, we preferred teamwork.” Besides, students preferred to work with the peers who lived in the same city, worked at the same work place or studied at the same university. Students mostly preferred taking and/or dividing equal roles in their team projects. But somehow in some teams some students had more workload than other members. Moreover, as students reported, leading motives for their preferences of individual work were eagerness for self study (learning by doing everything individually, the willing to follow their own schedules), possible low interaction among team members, loss of time and organization problems. Supporting these views, one respondent explained that “The reason why I preferred individual work was that I would have arranged a flexible working schedule. The advantage of individual work is that work progresses faster and in a more organized fashion. Not being dependent on someone, the probability of encountering bad surprises concerning planned work and schedules decreases.”
Students’ Suggestions for Future Students and Instructors of an Online PBL Course The last theme extracted from students’ writings involved suggestions for future students and instructors while attending online PBL courses in Table 6. Students stated several suggestions to be successful in a PBL online course. Students, especially, emphasized the importance of being organized, scheduled studying; submitting course assignments regularly and/or in time and learning by doing and owned research capability. For example, one student suggested that “Participants of the course, should follow the course seriously, should prepare a schedule for their responsibilities in their lives such as work, journey, exam etc., this would help them for time management… In
Table 6. Students' suggestions for future students and instructors of an online PBL course (theme: frequency)

How to be successful in an online PBL course
    Organized and scheduled studying: 24
    Submitting course assignments regularly and/or in time: 19
    Need of learning by doing and research capability: 19
    Monitoring and participating in chats, forum, and face-to-face sessions: 16
    More dedicated time: 14
    Working hard (sacrificing private life, sleep): 13
    Creating teams by assigning familiar and nearby team members: 9
    Revising previous students' projects: 6
    Identification of a good project subject and scope: 6

Recommendations for instructors of an online PBL course
    Provide further individual guidance: 19
    Provide updated examples, applications and a visually enriched course: 14
    Give abundant time for project completion: 9
    Encourage students: 6
    Inform students of online learning and PBL: 4
In order to study effectively, they should regularly follow the course, shouldn't miss chats and should do all assignments. Provided that they do all this, I believe they will be able to complete the program without any problem." Emphasizing the importance of assignments, another student stated, "To me, the pivotal point of the course is to do the assignments, since this course is not a course studied through books but one that requires practice… one should follow course handouts, do assignments, do practice, search for incoherent items, monitor forums." Other major suggestions for future students related to dedicating more time and working hard. One student reported, "In order to be successful in this course, at first you should dedicate adequate time; also you should study more than for other courses; sometimes you have to sleep less, sometimes you will delay your hobbies; in other words, you will study a lot."
Students also had recommendations for the course instructors, such as increasing individual guidance, putting more visuals and examples into the course content, and assigning extended time for project completion. Students wanted individual guidance from their instructors for particularly challenging parts of their projects; they thought they would be able to finish their projects more effectively and rapidly in this way. One student mentioned, "I wish I had had more guidance from my instructors while I was doing my projects." Regarding the course notes, one student pointed out, "The examples and applications in the course content should be increased in number. There are many project topics for all tastes. The content should be designed to help all project topics, since it takes a lot of time to find the right document on the Internet." Moreover, another student pointed out the need for an extended project completion time: "The course could have been started at an earlier time; by giving more time to participants, more effective studying on their projects could have been provided."
CONCLUSION AND FUTURE RESEARCH DIRECTIONS

This study analyzed critical issues, dynamics, and challenges regarding project-based learning, one of the most popular teaching and learning strategies in recent years, from the student perspective in an online course. The results focused on four main themes: students' views of the online PBL course, of the PBL strategy, and of team and individual work in PBL, and their suggestions for the future. Students' views regarding the online PBL course pointed to well-known characteristics of online courses, such as time and place flexibility and the benefits of synchronous and asynchronous communication tools for cooperative learning in this environment. Such online courses have become a common option, especially for learners who cannot attend
face-to-face learning in higher education (Moore & Kearsley, 2005). Students also indicated the lack of visuals and examples for practice as an inadequacy of the course content. They longed for more supplementary resources and face-to-face interaction, and for fewer course requirements, in order to succeed in the course. Hafner and Ellis (2004) suggest that the PBL environment, based on constructivist principles, should provide a firm foundation for scaffolding learning through coaching as well as modeling. Students' views regarding PBL reflected its foremost characteristics, such as practical knowledge gained for business life, learning by doing while searching for knowledge, enhanced experience and self-confidence in software project development and management, and improved research capability. These findings are also substantiated in the literature: several studies have reported similar advantages of the PBL method for students (e.g., Chartier & Gibson, 2007; Frank et al., 2003; Rooij, 2009; Wang et al., 2005). In addition to the benefits of PBL, researchers have also mentioned some of the problems students faced while preparing their projects, such as time management and team-based problems, which were specific to the PBL implementation and which could be overcome with appropriate arrangements. As the findings indicated, instructors should take students' low entry-level knowledge of project requirements into account and prevent the choice of over-extended project topics when assigning projects. As pointed out in other studies, such concerns regarding project design and planning procedures, including overly complex project topics and inadequate background knowledge, obstruct making sense of the project (Karaman & Celik, 2008; Thomas, 2000). PBL provides both individual and, especially, team work opportunities in learning environments. Students pointed out that efficient team work could only be carried out by dividing the project workload effectively among team members and
by exploiting the various expertise of team members. The effectiveness of team work could also be enhanced through regular interaction and by working with a limited number of members. Similarly, Johnson et al. (1994) described the following elements as promoters of collaborative work: individual accountability, equal participation and interaction, use of social skills, group processing discussions, and positive interdependence. The results showed that the following characteristics also emerged as motives to prefer team work: a divided workload and less time dedicated to the project, and the different expertise of members contributing to the project's success and comprehensiveness. Moreover, students had a chance to share and exchange knowledge with their peers in team work. Researchers have indicated that effective learning takes place when students work in teams, verbalize their thoughts, challenge the ideas of others, and work collaboratively to find solutions to problems (Johnson et al., 1994; Johnson et al., 2000). On the other hand, some students who preferred individual work cited loss of time, organization problems, and probable low interaction among members as their motives; these students also emphasized the importance and benefits of self-study. These findings indicate that there are advantages to encouraging learners to work in teams in online project-based courses; however, team work should not be obligatory, particularly for adult learners who insist on individual work. The effectiveness of PBL as an instructional method may depend on several factors. Instructors and students have important responsibilities in the project development process and in a PBL course (Frank et al., 2003; Karaman & Celik, 2008; Thomas, 2000). The students in this study offered significant and valuable suggestions. Some of these are: to be successful in an online PBL course, one should be organized, submit assignments regularly, have research capability, monitor and participate in chats, forums, and face-to-face sessions, dedicate adequate time, work hard, revise previous projects, choose a good project topic, and work with familiar
and nearby friends. According to the students, instructors should also provide as much individual guidance as possible, embed updated examples, applications, and visuals into the course, encourage students, inform them about online learning and PBL, and give abundant time for project completion in order to carry out a more effective course. These suggestions may serve as guiding principles for other instructors or students who are planning to give or take a project-based course. In conclusion, the findings and discussion of the present study have several pedagogical, theoretical, and practical implications for online and project-based learning. Although this study has limitations that constrain generalization of the results, such as a small sample of 49 students gathered from one online PBL course, the findings direct attention to important design and implementation principles that need to be taken into consideration for an online project-based learning course. Other instructors and course coordinators may use the outcomes of this study as basic points to keep in mind when delivering an effective online PBL course.
REFERENCES

Barak, M., & Dori, Y. J. (2004). Enhancing undergraduate students' chemistry understanding through project-based learning in an IT environment. Science Education, 89(1), 117–139. doi:10.1002/sce.20027 Barron, J. S., Schwartz, D. L., Vye, N. L., Moore, A., Petrosino, A., & Zech, L. (1998). Doing with understanding: Lessons from research on problem- and project-based learning. Journal of the Learning Sciences, 7(3&4), 271–311. doi:10.1207/s15327809jls0703&4_2
Bereiter, C., & Scardamalia, M. (2003). Learning to work creatively with knowledge. In Corte, E. D., Verschaffel, L., Entwistle, N., & Merriënboer, J. V. (Eds.), Powerful learning environments: Unravelling basic components and dimensions (pp. 73–78). Oxford, UK: Elsevier Science. Blumenfeld, P., Soloway, E., Marx, R., Krajcik, J., Guzdial, M., & Palincsar, A. (1991). Motivating project-based learning: Sustaining the doing, supporting the learning. Educational Psychologist, 26(3&4), 369–398. doi:10.1207/ s15326985ep2603&4_8 Chartier, B., & Gibson, B. (2007). Project-based learning: A search and rescue UAV – perceptions of an undergraduate engineering design team: A preliminary study. Proceedings of the 2007 AaeE Conference, Melbourne. Dewey, J. (1938). Experience and education. New York, NY: Macmillan. Doppelt, Y. (2007). Assessing creative thinking in design-based learning. International Journal of Technology and Design Education, 19(1), 55–65. doi:10.1007/s10798-006-9008-y Duarte, D., & Tennant Snyder, N. (2001). Mastering virtual teams (2nd ed.). San Francisco, CA: Jossey-Bass. Frank, M., Lavy, I., & Elata, D. (2003). Implementing the project-based learning approach in an academic engineering course. International Journal of Technology and Design Education, 13, 273–288. doi:10.1023/A:1026192113732 Hafner, W., & Ellis, T. J. (2004). Project-based, asynchronous collaborative learning. Proceedings of the 37th Hawaii International Conference on System Sciences. Piscataway, NJ: IEEE.
Hall, A. (2007). Vygotsky goes online: Learning design from a socio-cultural perspective. Learning and Socio-cultural Theory: Exploring Modern Vygotskian Perspectives International Workshop 2007. Retrieved June 6, 2009, from http://ro.uow. edu.au/ llrg/vol1/iss1/6 Hill, A. M. (1997). Reconstructionism in technology education. International Journal of Technology and Design Education, 7(1–2), 121–139. doi:10.1023/A:1008856902644 Isler, V. (1997). Sanal Universite. Paper presented at Inet-tr’97: Turkiye Internet Konferansi, Ankara, Turkey. Jasinski, M. (1998). Teaching and learning styles that facilitate on line learning: Documentation project. Project Report, Douglas Mawson Institute of TAFE. Johnson, D. W., Johnson, R. T., & Holubec, E. J. (1994). Cooperative learning in the classroom. Alexandria, VA: Association for Supervision and Curriculum Development. Johnson, D. W., Johnson, R. T., & Stanne, M. B. (2000). Cooperative learning methods: A meta analysis. The Cooperative Learning Center at the University of Minnesota. Karaman, S., & Celik, S. (2008). An exploratory study on the perspectives of prospective computer teachers following project-based learning. International Journal of Technology and Design Education, 18(2), 203–215. Krajcik, J., Czerniak, C., & Berger, C. (1999). Teaching science: A project-based approach. New York, NY: McGraw-Hill College. Krajcik, J. S., Blumenfeld, P. C., Marx, R. W., & Soloway, E. (1994). A collaborative model for helping middle-grade science teachers learn project-based instruction. The Elementary School Journal, 94(5), 483–497. doi:10.1086/461779
Land, S. M., & Greene, B. A. (2000). Project-based learning with the World Wide Web: A qualitative study of resource integration. ETR&D, 48(1), 45–68. doi:10.1007/BF02313485 Lee, C. I., & Tsai, F. Y. (2004). Internet project-based learning environment: The effects of thinking styles on learning transfer. Journal of Computer Assisted Learning, 20(1), 31–39. doi:10.1111/j.1365-2729.2004.00063.x Lou, Y., & MacGregor, S. K. (2004). Enhancing project-based learning through online between-group collaboration. Educational Research and Evaluation, 10(4-6), 419–440. doi:10.1080/13803610512331383509 Marshall, C., & Rossman, G. B. (1999). Designing qualitative research (3rd ed.). Thousand Oaks, CA: Sage Publications. Mills, J. E., & Treagust, D. F. (2003). Engineering education – is problem-based or project-based learning the answer? Australian Journal of Engineering Education, 4. Retrieved June 6, 2009, from http://www.aaee.com.au/journal/2003/mills_treagust03.pdf Mioduser, D., Nachmias, R., Oren, A., & Lahav, O. (2000). Web-based learning environments: Current pedagogical and technological state. Journal of Research in Computing in Education, 33(1), 55–76. Moore, M. G., & Kearsley, G. (2005). Distance education: A systems view (2nd ed.). Belmont, CA: Wadsworth Publishing Company. Perrenet, J. C., Bouhuijs, P. A. J., & Smits, J. G. M. M. (2000). The suitability of problem-based learning for engineering education: Theory and practice. Teaching in Higher Education, 5(3), 345–358. doi:10.1080/713699144
Pratt, M. G. (2009). From the editors. For the lack of a boilerplate: Tips on writing up (and reviewing) qualitative research. Academy of Management Journal, 52(5), 856–862. Rooij, S. W. (2009). Scaffolding project-based learning with the project management body of knowledge (PMBOK). Computers & Education, 52(1), 210–219. doi:10.1016/j.compedu.2008.07.012 Smart, K., & Cappel, J. (2006). Students’ perceptions of online learning: A comparative study. Journal of Information Technology Education, 5, 201–219. Thomas, J. W. (2000). A review of research on project-based learning. Retrieved June 16, 2009, from http://www.bobpearlman.org/ BestPractices/ PBL_Research2.pdf Thomas, R., & MacGregor, K. (2005). Online project-based learning: How collaborative strategies and problem solving processes impact performance. Journal of Interactive Learning Research, 16(1), 83–107. Tolsby, H., Nyvang, T., & Dirckinck-Holmfeld, L. (2002). A survey of technologies supporting virtual project based learning. In Proceedings of the Third International Conference on Networked Learning 2002 (pp. 572-580), Lancaster University and Sheffield University. Wang, J., Fong, Y. C., & Alwis, W. A. M. (2005). Developing professionalism in engineering students using problem based learning. Proceedings of the 2005 Regional Conference on Engineering Education (pp. 1- 9). Johor, Malaysia. Wang, M., Pool, M., Harris, B., & Wangemann, P. (2001). Promoting online collaborative learning experiences for teenagers. Educational Media International, 38(4), 203–215. doi:10.1080/09523980110105079
Yin, R. (1994). Case study research: Design and methods (2nd ed.). Beverly Hills, CA: Sage Publishing.
ADDITIONAL READING

Atkinson, J. (2001). Developing Teams through Project-based Learning. Hampshire, England: Gower. Boss, S., & Krauss, J. (2007). Reinventing project-based learning: Your field guide to real-world projects in the digital age. Eugene, OR: International Society for Technology in Education. Diehl, W., Grobe, T., Lopez, H., & Cabral, C. (1999). Project-based learning: A strategy for teaching and learning. Boston, MA: Center for Youth Development and Education, Corporation for Business, Work, and Learning. Grabe, M., & Grabe, C. (2004). Integrating technology for meaningful learning (4th ed.). Boston, MA: Houghton Mifflin Company. Häkkinen, P. (2002). Internet-based learning environments for project-enhanced science learning. Journal of Computer Assisted Learning, 18(2), 232–235. doi:10.1046/j.1365-2729.2002.t01-1-00230.x Hargis, J. (2005). Collaboration, community and project-based learning – Does it still work online? International Journal of Instructional Media, 32(2), 157–161. Hmelo-Silver, C. E. (2004). Problem-based learning: What and how do students learn? Educational Psychology Review, 16(3), 235–266. doi:10.1023/B:EDPR.0000034022.16470.f3 Lou, Y. (2004). Learning to solve complex problems through between-group collaboration in project-based online courses. Distance Education, 25(1), 49–66. doi:10.1080/0158791042000212459
Major, C. H., & Palmer, B. (2001). Assessing the effectiveness of problem-based learning in higher education: Lessons from the literature. Academic Exchange Quarterly, 5(1), 4–9.
Scardamalia, M., & Bereiter, C. (2003). Knowledge building. In Encyclopedia of Education. (2nd Ed., p. 1370-1373). New York: Macmillan Reference, USA.
Markham, T. (2003). Project Based Learning Handbook. Buck Institute for Education.
Swan, K., Shea, P., Fredericksen, E., Pickett, A., Pelz, W., & Maher, G. (2000). Building knowledge building communities: Consistency, contact, and communication in the virtual classroom. Educational Computing Research, 23(4), 359–383.
McGrath, D. (2002). Getting started with project-based learning. Learning and Leading with Technology, 30(3), 42–45. Murphy, K. L., & Gazi, Y. (2001). Role plays, panel discussions and simulations: Project-based learning in a web-based course. Educational Media International, 38(4), 261–270. doi:10.1080/09523980110105132 Ozdener, N., & Ozcoban, T. (2004). A project based learning model's effectiveness on computer courses and multiple intelligence theory. Educational Sciences: Theory and Practice, 4(1), 164–170. Reeves, T. C. (2000). Alternative assessment approaches for online learning environments in higher education. Journal of Educational Computing Research, 23(1), 101–111. doi:10.2190/GYMQ-78FA-WMTX-J06C Rosenfeld, S., & Ben-Hur, Y. (2001). Project-based learning (PBL) in science and technology: A case study of professional development. Science and Technology Education: Preparing future citizens, I & II, 31-37. Savin-Baden, M. (2000). Problem-based Learning in Higher Education: Untold Stories. Buckingham: The Society for Research into Higher Education & Open University Press. Savin-Baden, M. (2007). A Practical Guide to Problem-based Learning Online. New York: Routledge.
Synteta, P. (2003). Project-Based e-Learning in higher education: The model and the method, the practice and the portal. Studies in Communication. New Media in Education, 3(3), 263–269. Yukselturk, E., & Cagiltay, K. (2008). Collaborative Work in Online Learning Environments: Critical Issues, Dynamics and Challenges. In Orvis, K. L., & Lassiter, A. L. R. (Eds.), ComputerSupported Collaborative Learning: Best Practices and Principles for Instructors (pp. 114–139). USA: Information Science Publishing.
KEY TERMS AND DEFINITIONS

Asynchronous Instruction: It does not require simultaneous participation of all students and instructors; students may arrange their own study time and access or study learning materials according to their schedules. Forms of asynchronous delivery include email, discussion boards, and Web 2.0 tools.
Collaborative Learning: It is an instruction method in which students at various performance levels work together in small groups toward a common goal.
Computer-Supported Collaborative Learning: It is a method of supporting collaborative learning using computers and Internet technologies.
Group Work: It is a form of collaborative learning. It aims to provide for individual differences and to develop students' knowledge, communication skills, collaborative skills, and attitudes.
Online Learning Environment: It is an anywhere and anytime learning environment allowing educators to deliver a course asynchronously, synchronously, or a combination of the two.
Project-Based Learning: It is an individual or group activity that goes on over a period of time, resulting in a product, presentation, or performance.
Software Development Process: It is a structure imposed on the development of a software product.
Synchronous Instruction: It requires simultaneous participation of all students and instructors. Forms of synchronous delivery include two-way video conferences, telephone conversations, and chat sessions.
APPENDIX

Open-ended questions in the questionnaire (Q1: questions for students who worked on teamwork projects; Q2: questions for students who worked on individual projects)

What are your ideas about this project-based course? Please write the advantages or disadvantages of a project-based course compared to other courses. (Q1, Q2)
Why have you preferred team work in the course? What advantages have you had in teamwork? Are you satisfied with teamwork? (Q1)
Please explain your responsibilities in the project. How have you divided tasks? Please explain other group members' tasks. (Q1)
How have you corresponded with other group members? Which communication tools have you used more? How have you completed your tasks in the project? (Q1)
Why have you preferred individual work? What advantages have you had in individual work? Are you satisfied with individual work? (Q2)
If you were to take this course again, which one would you choose: team work or individual work? Please explain. (Q1, Q2)
What kind of problems have you had in the process of project design and development? How have you solved these problems? (Q1, Q2)
What are the benefits or difficulties of this online course for you? Could you please give examples? (Q1, Q2)
What is the basis for completing the course successfully, for you? Have you acquired the skills you expected? (Q1, Q2)
What do you think of information technologies after taking the course? How do you find your competence in the IT field? (Q1, Q2)
What do you suggest to future course takers to be successful, and to the instructors to design a more effective course? What should/shouldn't they do? (Q1, Q2)
Chapter 17
Students' Perceptions, Interaction and Satisfaction in the Interactive Blended Courses: A Case Study
Bünyamin Atici, Firat University, Turkey
Yalın Kılıç Türel, Firat University, Turkey
DOI: 10.4018/978-1-60960-615-2.ch017
ABSTRACT

Blended courses, which offer students and teachers several possibilities, such as becoming more interactive and more active, have become increasingly widespread in both K-12 and higher education settings. With the rise of cutting-edge technologies, institutions and instructors have embarked on creating new learning environments with a variety of new delivery methods. At the same time, designing visually impressive and attractive blended settings for students has become easier with extensive learning and content management systems (LMS, CMS, LCMS) such as Blackboard, WebCT, and Moodle, and virtual classroom environments (VLE) such as Adobe Connect, Dimdim, and WiZiQ. In this study, the authors aimed to investigate students' perceptions of and satisfaction with the designed interactive blended learning settings and to find out the students' views on both the synchronous and asynchronous parts of the interactive blended learning environment (IBLE).
INTRODUCTION

A new generation of young people is growing up with technology and has different habits, behaviors, and expectations than previous generations. This phenomenon requires both designing ideal instructional
settings for learners by taking the necessities of this new generation into account and helping institutions become more competitive, more challenging, and more adaptive (Dziuban, Moskal, & Hartman, 2005). Under these circumstances, it is inevitable for official and commercial institutions to make use of e-learning applications in their instructional activities. In designing learning
environments, the Internet enables us to use different tools, to provide access from anywhere at any time, and to keep all kinds of records regarding instruction. Beyond its widespread usage, the Internet also affects how learners learn, study, and communicate. Therefore, it is crucial to employ the Internet as a supportive element for learning. On the other hand, when instruction is based merely on the Internet, it may suffer from the several disadvantages of online distance learning, and those problems need to be eliminated by combining traditional face-to-face events (Akkoyunlu & Soylu, 2008). The blended learning concept is a reflection of this requirement (Rovai & Jordan, 2004). It is important to highlight that blended learning is a standalone concept: it is neither an enhanced face-to-face classroom strategy nor a fully online learning method (Garrison & Kanuka, 2004). "Blended (hybrid) learning" (BL) can be defined simply as a well-balanced combination of online and face-to-face instruction. Different structures can be used within an instructional setting; however, the most important thing that makes blended learning special and distinctive is its flexible structure, which is designed according to the existing possibilities, necessities, and conditions of the context in which it will be applied. In this manner, each blended learning system created will be unique. Meanwhile, synthesizing the strongest parts of both face-to-face and online learning has proved to be the most effective approach in several studies (Olapiriyakul & Scher, 2006, p. 222). Elearnspace (2005) defined blended learning as follows: "blended learning takes the best of both worlds and creates an improved learning experience for the student" (n.p.), because BL has diverse benefits such as increased interaction, compatibility, better learning, low costs and low session time, and better retention (Young, 2002). The success of BL is associated with how interactively it is designed. Interaction gives learners an opportunity to meet their needs and
competences, to shape their learning experiences, and to be motivated. Students who can control their learning pace also find time to process and reflect on information. Interaction in a learning process emerges as an important mechanism required for acquiring information and for both cognitive and physical development (Lim, Lee, & Richards, 2006). Blended learners do not learn through only one channel, one method, or one way; interaction opens up different ways and channels to learn. Consequently, in both the face-to-face and online sections of BL settings, using interaction properly and in a well-balanced way will be more efficient and effective. Institutions need to change their policies and practices by redesigning their campus structure around learning management tools (Olapiriyakul & Scher, 2006). Therefore, in giving online support to users, the Learning Management System (LMS) as a ground platform is also critically important. LMSs can be either commercial or open source, and they are utilized for organizing all processes of students, teachers, and courses, as well as content delivery and communication between users, such as teacher-student and student-student communication. In addition to the LMS platform, a VLE (virtual learning environment) or virtual classroom software should be used for synchronous meetings.
THEORETICAL FRAMEWORK OF BLENDED LEARNING

As state-of-the-art technologies have become widespread, researchers have started to consider how to utilize them in education more effectively. These technologies are basically employed in three ways: integration of information and communication technologies (ICT) into traditional classroom settings as supportive tools; fully online, or in other words e-learning, settings; and the combination of both approaches, blended learning settings (Gülbahar, 2009; Dziuban, Moskal & Hartman, 2005).
Definition

Although "blending" as an educational concept implies the combination of different delivery modes of learning, different technologies, or different learning approaches and models, today the term is generally used as "blended learning" or "blended e-learning," which stands for the integration of online learning and traditional face-to-face learning in an optimum way. There are diverse related concepts, such as hybrid learning and mixed learning, as well as blended learning (BL); they practically refer to the same meaning in our field. Though one of the most used concepts explaining such a mixed mode of instruction is "hybrid learning," a few researchers claim that it is slightly different from blended learning (Hinterberger, Fässler, & Bauer-Messer, 2004, as cited in Olapiriyakul & Scher, 2006; Koohang, Britz, & Seymour, 2006; Young, 2002). Fainholc and Scagnoli (2009) defined the distinction as "Hybrid learning as a combination of on line teaching with face-to-face as an educational proposal without replacing face-to-face meetings and Blended learning as a modification of the class schedule to include online and face-to-face components" (p. 3). According to Osguthorpe and Graham (2003), the main aim of BL is to establish a congruous balance between online content and face-to-face student interaction, and this balance can vary based on several factors such as the nature of the course, the needs and characteristics of the target audience, the expectations and background of the instructor, and the available online resources and tools. Whitelock and Jeffs (2003) expressed the most mentioned definitions of blended learning as combinations of (1) conventional instruction with web-based tools, (2) media and tools utilized in e-learning settings, and (3) a variety of pedagogic approaches regardless of instructional technology use. In a similar way, Graham (2006) elucidated the combination process in three separate descriptions based on a literature review: first, combining instructional modalities (or delivery
media); second, combining instructional methods (as in Driscoll, 2002); and third, combining online and face-to-face instruction (as in Young, 2002). Unlike the first and second descriptions, the third basically reflects the common consensus regarding the definition of BL (Graham, 2006). BL may contain varied forms of instructional tools, such as collaboration software, self-paced e-courses, and knowledge management systems, and may expose a blending of traditional teacher-centered training, synchronous online conferencing, asynchronous individualized instruction, and workplace training from an expert or guide (Singh, 2003). The highly essential part of this process is to establish equilibrium between the two instructional contexts – traditional face-to-face settings and online settings – in light of the most advantageous parts of both contexts while constructing the BL environment (Akkoyunlu & Soylu, 2008). With respect to the mixture ratio, as stated above, it is completely flexible and peculiar to the program, being developed based on the needs and preferences of learners and the available resources. To exemplify, the English language teaching program in the open education faculty of Anadolu University in Turkey adopted a '2+2 model', namely fifty percent face-to-face for the 1st and 2nd years and fifty percent fully online for the 3rd and 4th years, while supporting online resources and all kinds of web-based tools and facilities during the entire four-year program (IÖLP, 2010). Moreover, some blended courses comprise 50% web-based applications, while others are blended in different proportions, such as 20% synchronous online meetings accompanied by extensive online content (Dziuban, Moskal & Hartman, 2005).
Necessities for Blended Learning

The main components of BL, online and traditional settings, were thoroughly apart from each other until the beginning of the 2000s, since their goals and contexts were distinct (Graham, 2006), and it was a common belief that
each of them had its own superiorities. On the one hand, e-learning implementations have been developing and becoming more and more popular as a means of distance learning thanks to their advantages (i.e., time and place flexibility, support for personalized learning); on the other hand, several problems have arisen, such as a lack of student socialization (Akkoyunlu & Soylu, 2008) and a lower level of fidelity than in face-to-face environments (Graham, 2006). Students who are less self-regulated and who expect direction and motivation from a visible professor feel frustrated and alone in fully online courses. Likewise, active students may dominate classroom discussions; as a result, introverted students cannot reveal themselves adequately in the traditional classroom and prefer online discussions (Rovai & Jordan, 2004). Besides, educational institutions that adopt e-learning for delivering instruction have disregarded the fact that traditional learning is still an appreciated and trusted method for most students and educators (Skelton, 2009). Therefore, researchers realized that it is not rational to prefer one method over another and agreed on benefiting from the most powerful aspects of both systems while designing the new environments known as blended learning (Akkoyunlu & Soylu, 2008).
Benefits and Challenges of Blended Learning

According to much research, most of the students who have taken blended courses have positive attitudes towards BL (Rovai & Jordan, 2004), because BL not only provides all the beneficial aspects of e-learning (i.e., low cost, efficient use of time, flexible time and place) but also supports an interactive face-to-face instructional environment that only a visible teacher can provide (Brown, 2003). Therefore, a more effective and robust experience, compared with both traditional and fully online learning, can be obtained with BL (Dziuban, Moskal, & Hartman, 2005). It can clearly be said that there
are many advantages of BL. Koohang, Britz, and Seymour (2006) summarized these advantages from the literature as convenience, increased interaction, flexibility, higher retention, increased learning, reduced session time, and decreased costs. In terms of time flexibility, BL can largely prevent course dropout and absenteeism problems (Dziuban, Moskal & Hartman, 2005). As the duration of traditional courses decreases, it becomes more possible to utilize institutional resources and infrastructure effectively and, as a result, to cut down general costs. To exemplify, Dziuban and Moskal's (2001) study reveals that, due to the advantages of BL, only one hour of real-classroom utilization can be sufficient for courses that take three hours in traditional face-to-face instruction (as cited in Rovai & Jordan, 2004). Besides, this kind of instruction gives students the opportunity to set up communications and interactions in the face-to-face environment and to maintain them in the online mode (Olapiriyakul & Scher, 2006). If blended settings are well designed, it is also possible to achieve these benefits: pedagogical richness, access to knowledge, social interaction, personal agency, cost effectiveness, and ease of revision (Osguthorpe & Graham, 2003). In particular, pedagogical richness, access to knowledge, and cost effectiveness are the most essential features of BL (Graham, 2003). Since teachers predominantly prefer teacher-centered approaches in their classrooms, students need solutions that provide active, individualized, collaborative learning and problem solving as part of learner-centered strategies. Students can access, anytime and from anywhere, all kinds of resources published on the course website. As for cost effectiveness, there are several BL projects showing low costs for large groups of students (Graham, 2003). Fainholc and Scagnoli (2009) assert that BL in fact does not decrease cost, but it enhances the quality of instruction with progressive strategies. Vaughan (2007) reviewed the benefits from three perspectives: students, faculty,
and administration. From the students' perspective, BL offers comfortable interaction with classmates, teachers, and course content both inside and outside of the classroom. He suggests that in BL settings students mainly benefit from time flexibility and improved learning outcomes. For teachers, BL provides enhanced teacher-student interaction, increased student engagement, and a flexible environment. From the administrative perspective, BL improves the reputations of institutions and gives them the opportunity to use their resources more effectively while reducing operational costs (Vaughan, 2007). Like any instructional approach, BL has various challenges as well. First of all, designing a BL system requires many innovations, both in policy and in practice. Policies should support such a new campus structure and eliminate barriers to initiating the system. Technical support is highly important for both teachers and students while operating the system. The infrastructure for establishing the web-based learning system, including course and learning management systems, web conferencing tools, servers, databases, e-books, and video lectures, should be well determined, taking needs and resources into consideration. Instructors, as both subject matter experts and teachers, have a special role in implementation, and they need to take part in designing online content and instruction, for example by selecting appropriate teaching and learning strategies and planning instructional activities and communication (Olapiriyakul & Scher, 2006). In particular, the initial process of establishing BL may be arduous, and it takes time to design all parts of the instruction as well as to acquire new technology and teaching skills and to pass through organizational changes and bureaucratic barriers (Vaughan, 2007). Considering previous research, Graham (2006) categorized the issues of BL under six headings: (1) the role of synchronous interaction, (2) the role of learner selections and self-regulation, (3) models for support and training, (4) finding equilibrium between innovation and production, (5) cultural adaptation, and (6)
dealing with the digital divide. In particular, adaptation by teachers, administrators, and students, and a balance between instructional and technical factors, are crucial for achieving a successful system. In conclusion, as stated above, there are various advantages and disadvantages of BL programs. In order to have a better understanding of a well-balanced and effective BL environment, it is necessary to assess all of the problems and advantages of experienced systems. The flexibility of BL, although considered a critical advantage, may be regarded as a barrier to the standardization of such systems. Thus, the following section discusses different BL models in order to give researchers an opportunity to determine the ideal BL model.
Blended Learning Models

There is no single blended learning model that suits every kind of instructional setting; therefore, researchers have developed a number of models for different settings. One of the most well-known categorizations of BL models was presented in Valiathan's (2002) study. She described the BL models as skill-driven learning, attitude-driven learning, and competency-driven learning. The skill-driven learning model aims to facilitate particular knowledge and skills of students, while the attitude-driven learning model mixes a variety of events and media in order to develop particular behaviors. Finally, the competency-driven learning model combines performance support systems with knowledge management sources for improving workplace learning (Valiathan, 2002). Drawing on Maslow's and Vygotsky's theories, Chew, Jones and Turner (2008) reviewed Salmon's e-Moderation and e-Tivities Model arising from the Open University of the UK, the Learning Ecology Model by Sun Microsystems, the Blended Learning Continuum developed by the University of Glamorgan, and the Inquiry-based Framework by Garrison and Vaughan. For instance, the Continuum Model has basically four stages of blended learning. The
first stage is the basic ICT usage stage, referring to the involvement of PowerPoint and Excel with no online technology, and is regarded as a traditional stage. The second stage is the e-enhanced stage, covering learning management systems for performance and communication and 1-29% online support of the whole course. The third is the e-focused stage, namely the ideal BL stage, containing interactive materials, discussion boards, and online assessments, which means 30-79% of the whole instruction consists of online activities. The last one is called the e-intensive stage, mainly containing online courses with less face-to-face time for introductions, briefings, and similar contexts (Chew, Jones & Turner, 2008). Different necessities require different contexts and models for blended learning. Martyn (2003) suggested a convenient and functional BL model called the 'Hybrid Online Model' (Figure 1) for many settings (Rovai & Jordan, 2004). This model contains face-to-face sessions for the opening and closure of the course. Besides, there are communication facilities including chat sessions, e-mail opportunities, and online threaded discussions between students and instructor and among students (Martyn, 2003).
Figure 1. The hybrid online model © 2002, Margie Martyn. Used with permission
Blended Learning Tools

In order to design appropriate blended learning environments, it must be decided which e-learning tools are going to be utilized. Depending on several factors, including the necessities of the program, the general type of course content, and the need for communication activities, a variety of online tools and applications (LMS, CMS, web-conferencing, survey, collaboration, assessment tools, etc.) can be selected. For delivering instruction online, the most important tool is the Learning Management System (LMS). Ellis (2009, p. 1) defines an LMS as "a software application for the administration, documentation, tracking, and reporting of training programs, classroom and online events, e-learning programs, and training content". An LMS mainly supplies administrative tools in order to run the e-learning system effectively; however, it is not principally concerned with delivering media and content. For this, an LMS supports either content creation tools or ready-made solutions. Specifically for that part of e-learning systems, the CMS (Content Management System) is designed to create the required course content and manage it (Türel & Gürol, 2005). It is necessary to explain supportive tools, including CMSs and web-conferencing tools, before discussing the relationship between BL and the LMS. There are a number of CMSs designed and developed by instructional designers and programmers; Joomla, Mambo, Drupal, eFront, Php-Nuke, and XOOPS are examples of different CMSs, and they can be used as supportive tools for blended learning in presenting instructional activities and organizing assignments as well as delivering course content (Altun, Gülbahar, & Madran, 2008). Although CMSs are important resources for online learning, learning management systems (LMSs) already have the features that CMSs offer; that is why LMS usage is more widespread than CMS usage. Most theorists, including Dewey, Piaget, Gagne, and Vygotsky, have paid attention to the construction of knowledge and active learning, where both learner-learner
and learner-instructor interaction is quite high (Campo & DeVrieze, 2008). To design interactive blended settings from the constructivist perspective, another essential structure for establishing synchronous online learning settings in a BL model is web-conferencing tools or, in other words, online meeting tools. Web conferencing tools (WCTs) allow learners and teachers to communicate, collaborate, create and share resources, and give immediate feedback over the web simultaneously (Campo & DeVrieze, 2008). WCTs can be either commercial or open source/freely accessible. Elluminate, Dimdim, WebEx, and WiZiQ are some of the well-known WCTs, and most of them support sharing of various elements, including audio, video, screen, whiteboard, and any file stored on the computer. The LMS (Learning Management System), which is sometimes called a 'course management system (CMS)' or 'virtual learning environment (VLE)', has a vital role in setting up independent online settings and in supporting traditional learning with an online platform where students can learn and interact with each other and their instructors beyond their physical classrooms. LMSs are web-based software that control and implement various e-learning applications, such as standardized virtual libraries, records, reports, communication tools (chat, discussion, forum, online messaging), evaluation, student and course processing, design and deployment of content, and assessment. The Blend-XL (2006) project team clarified the functions of LMSs in the progress report of their project in these categories: authoring tools (course content and tests), administration of courses (helping teachers manage their courses), user administration (tracking systems for all users), roles and rights management, communication functions (synchronous and asynchronous communication tools), cooperation features (whiteboard, calendars, etc.), and personalization (for supporting individual working). There are a great number of LMSs (e.g., Blackboard, Atutor, Moodle) used in BL
implementations. Moodle is the most pervasive LMS due to its prominent features. Moodle is an open-source LMS developed, based on both needs and emergent problems, by a number of programmers and designers from all over the world. The official Moodle website (www.moodle.org) provides extensive resources, including forums, online assignments, e-journals, resources, how-to videos, and plug-ins. In this study, Moodle was used to create an interactive, collaborative, and constructivist environment. Moodle is a versatile LMS developed for creating Internet-based courses and managing them with the involved users. It supports over 70 languages and can be adapted to special conditions with its plug-ins and extensions. It is highly suitable for use in a BL environment (Moodle, 2009). Consequently, the learning management tools, or other specialized software for BL, are fundamental and need to be well organized. Additionally, other technological content, including e-books, audio and video lectures, podcasts, databases of research literature, and other multimedia which facilitate students' learning, needs to be designed. These elements are an essential aspect of BL that differs from a traditional course (Olapiriyakul & Scher, 2006).
Perceptions of Learners in Blended Settings

Particularly for students, blended learning provides a number of opportunities and advantages, such as interacting with peers and with the teacher at any time, both in and out of the classroom, and accessing all learning materials of the course. Owing to these different aspects, such new environments affect students' perceptions, satisfaction, and interaction to some extent differently from both traditional and fully online settings. Besides, BL is one of the most popular and challenging delivery methods for instruction, and it is extremely important to evaluate the satisfaction degree of learners in such settings. However, there
is little research on students' satisfaction with BL settings. Students' adoption of BL is a crucial indicator for evaluating the success and effectiveness of BL, and learner satisfaction has an important role in determining BL adoption (Wu, Tennyson, & Hsia, 2010). Building a meaningfully interactive setting is one of the prerequisites for increasing learners' satisfaction level. As Moore (1989) suggested, interaction has three types: (a) learner-content interaction, (b) learner-instructor interaction, and (c) learner-learner interaction. With respect to transactional distance theory, students' psychological perceptions are influenced by interaction as well as by collaboration and social presence (So & Brush, 2008). Based on an analysis of 24 articles, it has been stated that there is no significant difference in students' satisfaction between online and traditional settings (Allen, Bourhis, Burrell, & Mabry, 2002, as cited in So & Brush, 2008). This finding is important since it means online education is as satisfactory as traditional learning. Rovai and Jordan (2004) conducted a pioneering study in this area and compared three different modalities: traditional, fully online, and blended settings. They found that learners who were taught in BL settings had higher sense-of-community scores than those in both traditional and fully online settings. This finding also reveals that, with blended learning, students neither lose their social communication nor need more face-to-face interaction. These two factors, face-to-face interaction and a sense of community, were stated by students as the most crucial elements of BL in several studies (Conole, de Laat, Dillon, & Darby, 2008, as cited in Holley & Oliver, 2010). In a similar way, Garrison and Kanuka (2004) draw attention to three elements of communities of inquiry: cognitive, social, and teaching presence. Garrison and Kanuka (2004) explain the importance of the sense of community based on these elements:
"The sense of community and belonging must be on a cognitive and social level if the goal of achieving higher levels of learning is to be sustained. This requires the consideration of the different cognitive and social characteristics of each medium of communication." (pp. 97-98)

In order to maintain and sustain a sense of belonging to the instructional community, students' attention should be captured meaningfully, and the lecture should be both rich and pertinent to the topic. Besides, while providing cognitive presence, it is also vital to sustain social presence (Garrison & Vaughan, 2008). Arbaugh (2007) suggested that teaching presence is a robust indicator of perceived learning and satisfaction, depending on the delivery medium. The connections among variables such as teaching presence, sense of community, satisfaction, and achievement are highly essential for evaluating the overall perceptions and effectiveness of BL settings (Garrison & Vaughan, 2008). Akkoyunlu and Soylu (2008), who reviewed the perceptions of students in BL settings based on their learning styles, expressed the importance of face-to-face interaction for BL, and students' preferences validate this statement. Additionally, in the study of Holley and Oliver (2010), students expressed their satisfaction with both the online assessment tools of VLEs (virtual learning environments) and the freely accessible course materials on the web. On the other hand, Dziuban, Moskal and Hartman (2005) assert that the two major elements on which both students and teachers agree regarding satisfaction with BL are learning engagement and effective communication. Koohang and Durante (2003) draw attention to the positive effects of Internet usage skills on students' perceptions of BL. Besides, they also suggest that age and gender are not significant variables for the success of BL.
Aim of the Study

The main aim of this study is to determine students' satisfaction in a blended learning environment, to gather students' perceptions of interactive blended learning, and to explore the barriers and disadvantages of such settings.
Limitations

The sample for this study comprised full-time students in the Faculty of Education at Firat University. Only one institution was sampled, and the recorded data and students' opinions may not be fully representative of other faculties and institutions.
RESEARCH METHODOLOGY

In this study, we aimed to examine the factors affecting students' satisfaction with interactive blended learning environments (IBLE) by utilizing qualitative and quantitative approaches together. To that end, interactivity, sense of classroom community, social presence, and performance were examined in relation to satisfaction and performance in the IBLE. SPSS for Windows was used for analyzing the quantitative data, while MaxQda was used for the qualitative data analysis. Two settings were designed: (1) a traditional (face-to-face) instructional setting and (2) an interactive blended learning setting. The face-to-face setting ran 4 hours a week for three weeks (12 hours in total) and comprised the course textbook, study notes, in-class discussions, and group and individual work. During the same course period, the interactive blended learning setting involved online synchronous and asynchronous tools and environments as well as the F2F context. Moodle, as a learning management system platform, was used for asynchronous applications, and Dimdim, as an online meeting tool, was used for synchronous needs.
The syllabus, weekly discussion topics, course presentations, reading sources, and links were some of the main resources of the designed online platform. In total, 110 undergraduate students who took the "Instructional Design" course in the Department of Computer Education and Instructional Technology participated in the study. Based on their setting preferences, students were grouped into either the F2F or the blended learning (IBLE) setting: 50.9% (N=56) of the students were taught in the IBLE and 49.1% (N=54) were taught in the F2F setting; 65 students (59.1%) were male and 45 students (40.9%) were female. The possible effects of the independent variables, including interaction, social presence, performance, sense of classroom community, and collaboration, on the dependent variable, satisfaction, were examined and analyzed. Two research instruments were used for this study. The Blended Learning Questionnaire, which has 30 items, was designed by the researchers by adapting Picciano's (2002) student questionnaire, whose questions were compiled based on the Inventory of Presence Questionnaire of the Presence Research Working Group and on a questionnaire developed by Tu (2001). In addition, Rovai's (2002) Classroom Community Scale (CCS) was administered to examine the sense of classroom community (Table 1). Each of the variables was reclassified and transformed in terms of percentile ranks as high, average, and low, as presented in Table 2, and all transcripts of qualitative data were coded based on these percentile ranks. The independent-samples t-test, one-way ANOVA, and Kruskal-Wallis H-test were utilized to analyze the data.
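For readers less familiar with these procedures, the short sketch below illustrates in Python the kind of analysis pipeline described above: an independent-samples t-test comparing two groups, a percentile-rank split into low/average/high levels, and a one-way ANOVA with its non-parametric Kruskal-Wallis counterpart across those levels. The chapter's analyses were run in SPSS; this sketch only mirrors the logic, and the variable names and simulated scores are hypothetical rather than the study's data.

```python
# Illustrative sketch only: mirrors the analysis steps described in the text
# (t-test, percentile-rank grouping, one-way ANOVA, Kruskal-Wallis) on
# simulated, hypothetical scores, not the study's actual data.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical satisfaction scores for the two settings (56 IBLE, 54 F2F).
ible = pd.Series(rng.normal(24.0, 3.5, 56), name="IBLE")
f2f = pd.Series(rng.normal(19.9, 5.0, 54), name="F2F")

# Independent-samples t-test comparing the two settings.
t, p = stats.ttest_ind(ible, f2f)
print(f"t({len(ible) + len(f2f) - 2}) = {t:.2f}, p = {p:.3f}")

# Percentile-rank classification of the IBLE scores into three levels,
# analogous to the high/average/low grouping reported in Table 2.
levels = pd.qcut(ible.rank(pct=True), 3, labels=["low", "average", "high"])
print(levels.value_counts())

# One-way ANOVA across the three levels (e.g., does a second scale score
# differ by satisfaction level?), plus its non-parametric counterpart.
other_scale = pd.Series(rng.normal(70, 6, 56))  # hypothetical second scale
groups = [other_scale[levels == lv] for lv in ["low", "average", "high"]]
print("ANOVA:", stats.f_oneway(*groups))
print("Kruskal-Wallis:", stats.kruskal(*groups))
```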
FINDINGS AND DISCUSSIONS

An independent-samples t-test was used to examine whether there were any differences with regard to interaction, satisfaction, and sense of classroom community (SoCC) between the students (N=56) taught in the IBLE and the students (N=54) taught in the traditional (F2F) setting (Table 3).
Table 1. Blended learning questionnaire and sample items

Interaction (5 items): "The quantity of interaction with other learners" (Item 1); "The quality of interaction with the teacher" (Item 3)
Collaboration (7 items): "I share information with my classmates" (Item 7); "I am involved in group activities as part of my studies" (Item 8)
Social presence and performance (11 items): "The interactive blended course stimulated my desire to learn" (Item 14); "I felt I got to learn a great deal about the other students in the interactive blended course" (Item 16)
Satisfaction (7 items): "I prefer interactive blended learning" (Item 25); "Interactive blended learning is worth my time" (Item 27)
Table 2. Classification of students with respect to the variables (IBLE group, N=56)

Level      Interaction   Collaboration   Social presence and performance   Sense of classroom community   Satisfaction
High       19            18              17                                21                             20
Average    16            18              17                                15                             18
Low        21            20              22                                20                             18
FINDINGS AND DISCUSSIONS
An independent-samples t-test was used to examine whether there were any differences in interaction, satisfaction, and sense of classroom community (SoCC) between the students (N=56) taught in IBLE and the students (N=54) in the traditional learning setting (F2F) (Table 3). As seen in Table 3, significant differences were found between the two groups (IBLE and F2F) on the interaction, satisfaction, and SoCC variables. Moreover, the IBLE group (N=56) was classified into high, average, and low levels of interaction, social presence and performance, SoCC, and satisfaction using percentile ranks. After this classification, a one-way ANOVA was conducted for each variable across the three levels (Table 4). The analysis showed significant differences among the three levels for all variables. To identify which levels differed from one another, the LSD post hoc test was administered; according to its results, there were significant differences among the three ranks (high, average, and low) for each variable, including interaction, social presence and performance, SoCC, and satisfaction. Since the distribution of collaboration
was not homogeneous, the Kruskal-Wallis H-test and Tukey's test were used for this variable (Table 5). Likewise, significant differences were found between all levels of collaboration. As part of the qualitative analysis, students' discussions in both the synchronous and asynchronous settings were transcribed and coded with MAXQDA, a professional software package for qualitative data analysis (Table 6). In the table, facilitative statements (FS) refer to comments that create and sustain a discussion, encourage peers to speak, and use positive language, while debilitative statements (DS) refer to comments that interrupt dialogs, affect them negatively, or prevent peers from explaining their views. Individual comments (IC) cover all reflections, personal opinions, ideas, and self-knowledge that students expressed about situations or cases. As seen in Table 6, of the 22,054 words in total, 16,376 words were generated by students in the asynchronous settings and 5,678 words in the synchronous settings.
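The logic of these group comparisons can be sketched as follows: Levene's test checks homogeneity of variances, a one-way ANOVA with LSD-style follow-up comparisons is used when variances are homogeneous, and the Kruskal-Wallis H-test is the fallback otherwise. This is an illustrative Python sketch rather than the SPSS procedure used in the study; the file and column names are hypothetical, and the uncorrected pairwise t-tests only approximate the LSD test.

```python
# Sketch of the group comparisons reported in Tables 4 and 5; the column
# names ("satisfaction", "satisfaction_level") and file are assumptions.
import pandas as pd
from scipy import stats

ible = pd.read_csv("ible_students.csv")  # hypothetical: one row per IBLE student

# Split the satisfaction scores by the high / average / low level groups.
groups = [g["satisfaction"].to_numpy()
          for _, g in ible.groupby("satisfaction_level")]

lev_stat, lev_p = stats.levene(*groups)    # homogeneity of variances
if lev_p > .05:
    f_stat, f_p = stats.f_oneway(*groups)  # one-way ANOVA across the 3 levels
    # LSD-style follow-up: uncorrected pairwise t-tests between level pairs.
    pairwise_p = [stats.ttest_ind(a, b).pvalue
                  for i, a in enumerate(groups) for b in groups[i + 1:]]
    print(f"ANOVA F = {f_stat:.3f}, p = {f_p:.3f}; pairwise p = {pairwise_p}")
else:
    # Variances not homogeneous (as for collaboration): Kruskal-Wallis H-test.
    h_stat, h_p = stats.kruskal(*groups)
    print(f"Kruskal-Wallis H = {h_stat:.3f}, p = {h_p:.3f}")
```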
Table 3. Results of the t-tests for interaction, satisfaction, and SoCC

Variable                               Setting   N    Mean    SD     df    t       Sig.
Interaction                            IBLE      56   17.39   1.42   108   9.58*   .000
                                       F2F       54   14.25   1.97
Satisfaction                           IBLE      56   24.01   3.45   108   5.07*   .000
                                       F2F       54   19.88   4.96
Sense of classroom community (SoCC)    IBLE      56   74.67   6.19   108   5.30*   .000
                                       F2F       54   68.37   6.28
*p < .05
Table 4. One-way ANOVA results with respect to the variables (IBLE group)

Variable (Levene p)                        Source           Sum of Squares   df   Mean Square   F          Sig.
Interaction (p=.086)                       Between groups   167.187          2    83.593        98.394*    .000
                                           Within groups    45.028           53   .850
                                           Total            212.214          55
Collaboration (p=.044*)                    Between groups   567.332          2    283.666       305.774*   .000
                                           Within groups    49.168           53   .928
                                           Total            616.500          55
Social presence and performance (p=.456)   Between groups   666.413          2    333.206       159.925*   .000
                                           Within groups    110.427          53   2.084
                                           Total            776.839          55
Sense of classroom community (p=.089)      Between groups   1976.118         2    988.059       390.518*   .000
                                           Within groups    134.097          53   2.530
                                           Total            2110.214         55
Satisfaction (p=.147)                      Between groups   602.526          2    301.263       293.207*   .000
                                           Within groups    54.456           53   1.027
                                           Total            656.982          55
*p < .05
Table 5. Differences among collaboration levels (Kruskal-Wallis H-test)

Collaboration level   N    Mean rank   df   χ2       p      Significant pairs (Tukey)
High (1)              19   47.00       2    49.380   .000   1-2, 1-3, 2-3
Average (2)           16   29.50
Low (3)               21   11.00
Table 6. Synchronous and asynchronous transcription data

Category                                                               Asynch   Synch   Total
Number of words (NW)                                                   16376    5678    22054
Number of questions (NQ)                                               186      58      244
Number of words received in response (NWR)                             6973     985     7958
Number of contributions addressed to the whole group (NC)              298      64      362
Number of contributions with references to others' responses (NCR)     193      18      211
Uncertain statements (perhaps, maybe) (US)                             243      138     381
Certain statements (It is, I think) (CS)                               672      36      708
Facilitative statements (FS)*                                          568      82      650
Debilitative statements (DS)*                                          432      238     670
Individual comments (IC)                                               830      157     987
* Adapted from Beckwith (1987): facilitative statements create and maintain an open discussion; debilitative statements cut off or inhibit the interaction or the development of an idea.
Table 7. Word counts of the transcription categories with respect to interaction level

Category                                                               High (N=19)   Average (N=16)   Low (N=21)   Total (N=56)
Number of words (NW)                                                   9232          7324             5498         22054
Number of questions (NQ)                                               102           89               53           244
Number of words received in response (NWR)                             3890          2754             1314         7958
Number of contributions addressed to the whole group (NC)              193           118              51           362
Number of contributions with references to others' responses (NCR)     156           42               13           211
Uncertain statements (US)                                              52            104              225          381
Certain statements (CS)                                                396           252              60           708
Facilitative statements (FS)                                           387           208              55           650
Debilitative statements (DS)                                           186           202              282          670
Individual comments (IC)                                               456           321              210          987
Although students generated fewer words in the synchronous settings than in the asynchronous settings, uncertain statements (US) and debilitative statements (DS) made up a larger proportion of the total number of words in the synchronous mode than in the asynchronous mode. The transcription category counts, broken down by students' interaction levels as classified by percentile ranks, are presented in Table 7. Based on the analysis of the quantitative data and the interviews with students, the following important considerations can be stated (a brief computational check of the proportions above follows the list):
• Since students copied and pasted from electronic sources, the number of words in the asynchronous settings was higher than in the synchronous settings. This copy-paste shortcut was used mostly by students at the low levels of interaction and of social presence and performance. That social presence is a crucial factor, particularly for online students, has been emphasized in several studies (Gunawardena & Zittle, 1997; Rourke, Anderson, Garrison, & Archer, 2001; Tu & McIsaac, 2002; Richardson & Swan, 2003; Garrison & Kanuka, 2004; Garrison & Vaughan, 2008). Although few studies have investigated the relation between satisfaction and social presence, their findings show that students' perceptions of social presence and satisfaction are moderately related (So & Brush, 2008; Garrison & Vaughan, 2008). We also found this important relationship between these variables. Students whose sense of social presence and performance was low may not have taken part in the learning environments sufficiently, and this led to lower satisfaction levels.
• Students used certain statements more in the asynchronous settings. This may be explained by the fact that students could manage their time effectively there and write on the basis of their own research. Students with high satisfaction levels used more certain statements than those with low satisfaction levels; the highly satisfied group appears to have been more willing to participate in the learning platform.
• In contrast, the number of uncertain statements was higher in the synchronous settings.
• It is apparent that debilitative statements (DS) reduced students' participation and interaction in the synchronous settings.
• Debilitative statements, by interrupting robust communication between people in IBLE, negatively influenced students' sense of classroom community, social presence and performance, collaboration, satisfaction, and, most importantly, interaction.
• Students who showed a high level of interaction used more certain statements (CS).
• More debilitative statements (DS) were used by students with a low level of collaboration. Likewise, uncertain statements (US) were used more by students with a low level of social presence and performance.
• Students with a high sense of classroom community made more individual comments (IC) and also asked more questions than the other students.
• Finally, students with high satisfaction levels contributed more to their peers' learning in general. The findings also indicated that these students were the ones with a strong sense of classroom community (SoCC), consistent with Rovai and Jordan's (2004) study. Consequently, it can be stated that developing a positive SoCC had a considerable influence on students' satisfaction levels.
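Returning to the proportion point raised before the list, the shares of uncertain and debilitative statements can be checked directly from the word counts in Table 6; in the short sketch below, only those counts come from the chapter, and the code itself is illustrative.

```python
# Quick check of the US/DS proportions using the word counts from Table 6.
table6 = {
    "asynchronous": {"total_words": 16376, "US": 243, "DS": 432},
    "synchronous":  {"total_words": 5678,  "US": 138, "DS": 238},
}

for setting, counts in table6.items():
    us_share = counts["US"] / counts["total_words"]
    ds_share = counts["DS"] / counts["total_words"]
    print(f"{setting}: US {us_share:.1%}, DS {ds_share:.1%}")

# Output: asynchronous: US 1.5%, DS 2.6%
#         synchronous:  US 2.4%, DS 4.2%
# i.e., US and DS account for a larger share of the words in the synchronous mode.
```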
CONCLUSION AND IMPLICATIONS
Blended learning is not a new concept, but its implementations are continuously being improved by drawing on the strengths of both traditional face-to-face and online learning. When blended learning settings, which we call interactive blended learning environments (IBLE), are designed to feature interaction, one of the most important factors for effective learning, it is possible to enhance a number of components, including satisfaction, sense of classroom community, collaboration, and social presence and performance. This study shows that these components are tightly connected with each other and that it is possible to strengthen them through careful design that pays attention to appropriate learning principles, available resources, and the needs of the target students. In the near future, online instructional tools will allow designers to be more flexible, and the online portion of such programs will likely increase gradually. Several factors, such as the personal expectations and demands of both students and instructors, may be taken into consideration in future research. If a methodology similar to that of this research were
applied to different content and contexts, such as other universities or primary and secondary schools, it might reveal interesting points and connections among the variables examined. Moreover, our research reflects one specific instance of a blended learning implementation, tied to the Turkish university mentioned, a particular subject matter and target group, and research instruments employed for a limited time, and it rests on the assumption that the results can be generalized to other studies. Examining blended learning with respect to each of these limitations may constitute a separate research topic. Based on the findings of this research and of similar studies, it might be useful to work on developing new models and approaches for designing more effective blended learning environments.
REFERENCES Akkoyunlu, B., & Soylu, M. Y. (2008). A study of student’s perceptions in a blended learning environment based on different learning styles. Journal of Educational Technology & Society, 11(1), 183–193. Altun, A., Gülbahar, Y., & Madran, O. (2008). Use of a content management system for blended learning: Perceptions of pre-service teachers. Turkish Online Journal of Distance EducationTOJDE, 9(4). Arbaugh, J. B. (2008). Does the community of inquiry framework predict outcomes in online MBA courses? International Review of Research in Open and Distance Learning, 9(2). Beckwith, D. (1987). Group problem-solving via computer conferencing: The realizable potential. Canadian Journal of Educational Communication, 16(2), 89–106.
Blend-XL. (2006). Progress report. (225552-CP1-2005-1-NL-MINERVA-MPP). Retrieved Jan 5, 2010, from http://www.blend-xl.eu/files/publications/blend-xl%20progress%20report.pdf Bonk, C. J., Kim, K. J., & Zeng, T. (2006). Future directions of blended learning in higher education and workplace learning settings. In Bonk, C. J., & Graham, C. R. (Eds.), Handbook of blended learning: Global perspectives, local designs (pp. 550–568). San Francisco, CA: Pfeiffer. Brown, R. (2003). Blending learning: Rich experiences from a rich picture. Training and Development in Australia, 30(3), 14–17. Campo, M. A., & De Vrieze, L. (2008). Instructional design model: Blending digital technology in online learning. In J. Luca & E. Weippl (Eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2008 (pp. 2324-2329). Chesapeake, VA: AACE. Chew, E., Jones, N., & Turner, D. (2008). Critical review of the blended learning models based on Maslow’s and Vygotsky’s educational theory. In J. Fong, et al. (Eds.), Hybrid learning and education, (LNCS 5169, pp. 40-53). Dziuban, C. D., Moskal, P. D., & Hartman, J. (2005). Higher education, blended learning, and the generations: Knowledge is power: No more. In Bourne, J., & Moore, J. C. (Eds.), Elements of quality online education: Engaging communities. Needham, MA: Sloan Center for Online Education. ELEARNSPACE. (2005). Blended. Retrieved January 5, 2010, from http://www.elearnspace. org/doing/blended.htm Ellis, R. K. (2009). A field guide to learning management systems. ASTD Learning Circuits. Retrieved Mar 5, 2009, from http://www.astd. org/NR/rdonlyres/12ECDB99-3B91-403E-9B157E597444645D/23395/LMS_fieldguide_20091. pdf
Fainholc, B., & Scagnoli, N. (2009). Blended learning through interuniversity collaboration interaction. Paper presented at 23rd ICDE World Conference on Open Learning and Distance Education. Retrieved February 4, 2010, from http:// www.ou.nl/Docs/Campagnes/ICDE2009/Papers/ Final_Paper_134Fainholc.pdf Garrison, D. R., & Kanuka, H. (2004). Blended learning: Uncovering its transformative potential in higher education. The Internet and Higher Education, 7, 95–105. doi:10.1016/j. iheduc.2004.02.001 Garrison, D. R., & Vaughan, N. D. (2008). Blended learning in higher education: Framework, principles, and guidelines. San Francisco, CA: Jossey-Bass. Graham, C. R. (2006). Blended learning system: Definition, current trends, future directions. In Bonk, C. J., & Graham, C. R. (Eds.), Handbook of blended learning. San Francisco, CA: Pfeiffer. Gülbahar, Y. (2009). E-Öğrenme. Ankara: PEGEM. Gunawardena, C. N., & Zittle, F. J. (1997). Social presence as a predictor of satisfaction within a computer mediated conferencing environment. American Journal of Distance Education, 11(3), 8–26. doi:10.1080/08923649709526970
Koohang, A., & Durante, A. (2003). Learners’ perceptions toward the Web-based distance learning activities/assignments portion of an undergraduate hybrid instructional model. Journal of Information Technology Education, 2, 105–113. Lim, C. P., Lee, S. L., & Richards, C. (2006). Developing interactive learning objects for a computing mathematics module. International Journal on E-Learning, 5(2), 221–244. Martyn, M. (2003). The hybrid online model: Good practice. EDUCAUSE Quarterly, 26(1), 18–23. Moodle. (2009). Information. Retrieved from http://www.moodle.org Moore, M. G. (1989) Editorial: Three types of interaction. American Journal of Distance Education, 3(2), 1–7. Retrieved January 8, 2010, from http://aris.teluq.uquebec.ca/Portals/598/ t3_moore1989.pdf Olapiriyakul, K., & Scher, J. M. (2006). A guide to establishing hybrid learning courses: Employing information technology to create a new learning experience, and a case study. The Internet and Higher Education, 9, 287–301. doi:10.1016/j. iheduc.2006.08.001
Holley, D., & Oliver, M. (2010). Student engagement and blended learning: Portraits of risk. Computers & Education, 54(3), 693–700. doi:10.1016/j.compedu.2009.08.035
Oliver, R. (2005). Using a blended learning approach to support problem-based learning with first year students in large undergraduate classes. Frontiers in Artificial Intelligence and Applications, 133, 848-851. Retrieved January 8, 2010, from http://elrond.scam.ecu.edu.au/oliver/2005/ pbl.pdf
IÖLP. (2010). Anadolu University, Open Education Faculty, Program of English Language Teaching. Retrieved January 10, 2010, from http:// iolp.anadolu.edu.tr/genel.htm
Osguthorpe, R. T., & Graham, C. R. (2003). Blended learning environments, definitions and directions. The Quarterly Review of Distance Education, 4(3), 227–233.
Koohang, A., Britz, J., & Seymour, T. (2006). Hybrid/blended learning: Advantages, challenges, design, and future directions. Retrieved July 20, 2009, from http://proceedings.informingscience. org/InSITE2006/ProcKooh121.pdf
Picciano, A. G. (2002). Beyond student perceptions: Issues of interaction, presence, and performance in an online course. Journal of Asynchronous Learning Networks, 6(1), 21–40.
Richardson, I., & Swan, K. (2003). Examining social presence in online courses in relation to students’ perceived learning and satisfaction. Journal of Asynchronous Learning, 7(1), 69–88.
Türel, Y. K., & Gürol, M. (2005). A new approach for e-learning: Rapid e-learning. Proceeding of 5th International Educational Technology Conference, Sakarya, Turkey.
Rourke, L., Anderson, T., Garrison, D. R., & Archer, W. (2001). Assessing social presence in asynchronous, text -based computer conferencing. Journal of Distance Education, 14(3), 51–70.
Twigg, C. (2003). Improving learning and reducing cost: New models for online learning. EDUCAUSE Review, 38, 28–38.
Rovai, A. P. (2002). Building sense of community at a distance. International Review of Research in Open and Distance Learning, 3(1). Retrieved November 25, 2009, from http://www.irrodl.org/ content/v3.1/rovai.html Rovai, A. P., & Jordan, H. M. (2004). Blended learning and sense of community: A comparative analysis with traditional and fully online graduate courses. International Review of Research in Open and Distance Learning, 5. Singh, H. (2003). Building effective blended learning programmes. Educational Technology, 43, 51–54. Skelton, D. (2009). Blended learning environments: Students report their preferences. In Proceedings of the Twenty Second Annual Conference of the National Advisory Committee on Computing Qualifications. Retrieved January 10, 2010, from http://hyperdisc.unitec.ac.nz/naccq09/ proceedings/pdfs/105-114.pdf Tu, C. H. (2001). How Chinese perceive social presence: An examination of interaction in an online learning environment. Educational Media International, 38(1), 45–60. doi:10.1080/09523980010021235 Tu, C. H., & McIsaac, M. (2002). The relationship of social presence and interaction in online classes. American Journal of Distance Education, 16(3), 131–150. doi:10.1207/S15389286AJDE1603_2
Vaughan, N. (2007). Perspectives on blended learning in higher education. International Journal on E-Learning, 6(1), 81-94. Chesapeake, VA: AACE. Retrieved January 15, 2010, from http:// www.editlib.org/p/6310 Whitelock, D., & Jeffs, A. (2003). [Editorial]. Journal of Educational Media, 28(2-3), 99–100. Wu, J.-H., Tennyson, R. D., & Hsia, T.-L. (2010). A study of student satisfaction in a blended e-learning system environment. Computers & Education.. doi:10.1016/j.compedu.2009.12.012 Young, J. R. (2002). Hybrid teaching seeks to end the divide between traditional and online instruction. The Chronicle of Higher Education, 48(28), A33–A34.
KEY TERMS AND DEFINITIONS
Blended Learning: Mixing traditional and online learning so as to maximize and benefit from the effectiveness of both modes of instruction.
Collaboration: The cooperation of two or more people or organizations on a specific task.
Face to Face (F2F) Instruction: Instruction carried out in a traditional physical environment; both the teacher and the students are in the same place at the same time.
Interaction: From the pedagogical perspective, interaction can be defined as students' communication with each other, their teachers,
the course content, and the learning environment. It also denotes multi-modal communication rather than one-way transmission of information.
Learning Management System (LMS): A database-driven web portal system that manages and delivers instruction, encompassing students, teachers, administrators, courses, and grades.
Moodle: Moodle (Modular Object Oriented Dynamic Learning Environment) is an open-source, PHP- and MySQL-based learning management system (LMS). It is known as one of the most commonly used LMSs worldwide and has a highly flexible structure that enables instructors and developers to extend its capabilities by adding plug-ins and/or custom code.
Online Learning: Learning delivered via the Internet and web-based technologies; it is accepted as one of the most popular forms of distance learning.
Sense of Classroom Community: An emotional connection developed by students in the same classroom, including a personal sense of belonging to the classroom and of attaching value to one's classmates.
Social Presence: A student's level of awareness of other people in the same environment, virtual and/or real, and the sense of being together with those people.
Compilation of References
AAHE. (1994). CQI 101: A first reader for higher education. Washington, DC: American Association for Higher Education. About, I. H. E. P. (2010). Retrieved March 10, 2010, from http://www.ihep.org/ About/ about-IHEP.cfm Ackerman, P. L., Bowen, K. R., Beier, M. E., & Kanfer, R. (2001). Determinants of individual differences and gender differences in knowledge. Journal of Educational Psychology, 93, 797–825. doi:10.1037/0022-0663.93.4.797 Ackerman, P. L., & Heggestad, E. D. (1997). Intelligence, personality, and interests: Evidence for overlapping traits. Psychological Bulletin, 121, 219–245. doi:10.1037/00332909.121.2.219 Ackermann, R. J. (1985). Data, instruments and theory: A dialectical approach to understanding science. Princeton, NJ: Princeton University Press. ADA. (1994). Retrieved December 22, 2009, from http:// www.ada.gov/ adastd94.pdf Adams, D. A., Nelson, R. R., & Todd, P. A. (1992). Perceived usefulness, ease of use, and usage of information technology: A replication. Management Information Systems Quarterly, 16(2), 227–247. doi:10.2307/249577 ADEC. (2003). American Distance Education Consortium guiding principles for distance teaching and learning. Retrieved October 26, 2009, from http://www.adec.edu/ admin/ papers/ distance-teaching_principles.html Advanced Distributed Learning (ADL). (2009). Retrieved on December 29, 2009, from http://www.adlnet.org. Ahire, S. L., Landeros, R., & Golhar, D. Y. (1995). Total quality management: A review and an agenda for future research. Production and Operations Management, 4(3), 227–307.
Akkoyunlu, B., & Soylu, M. Y. (2008). A study of student’s perceptions in a blended learning environment based on different learning styles. Journal of Educational Technology & Society, 11(1), 183–193. Akyol, Z., & Garrison, D. R. (2008). The development of a community of inquiry over time in an online course: Understanding the progression and integration of social, cognitive and teaching presence. Journal of Asynchronous Learning Networks, 12(3), 3–22. Akyol, Z., & Garrison, D. R. (2011). Understanding cognitive presence in an online and blended community of inquiry: Assessing outcomes and processes for deep approaches to learning. British Journal of Educational Technology, 42(2), 233–250. doi:10.1111/j.14678535.2009.01029.x Akyol, Z., Garrison, D. R., & Ozden, M. Y. (2009). Online and blended communities of inquiry: Exploring the developmental and perceptional differences. [IRRODL]. International Review of Research in Open and Distance Learning, 10(6), 65–83. Akyol, Z., Vaughan, N., & Garrison, D. R. (in press). The impact of course duration on the development of a community of inquiry. Interactive Learning Environments. Alavi, M. (1994). Computer-mediated collaborative learning: An empirical evaluation. Management Information Systems Quarterly, 18, 159–174. doi:10.2307/249763 Alavi, M., & Gallupe, R. B. (2003). Using information technology in learning: Case studies in business and management education programs. Academy of Management Learning & Education, 2, 139–153.
Alavi, M., Marakas, G. M., & Yoo, Y. (2002). A comparative study of distributed learning environments on learning outcomes. Information Systems Research, 13, 404–415. doi:10.1287/isre.13.4.404.72 Alavi, M. (1984). An assessment of the prototyping approach to Information Systems development. Communications of the ACM, 27(6), 556–563. doi:10.1145/358080.358095 Alavi, M. (1994). Computer-mediated collaborative VLEs, or can it take center stage, as in case distance learning: An empirical evaluation. Management Information Systems Quarterly, 18(2), 159–174. doi:10.2307/249763 Alexander, M. W., Perrault, H., Zhao, J. J., & Waldman, L. (2009). Comparing AACSB faculty and student online learning experiences: Changes between 2000 and 2006. Journal of Educators Online, 6(1). Retrieved February 1, 2009, from http://www.thejeo.com/Archives/ Volume6Number1/Alexanderetalpaper.pdf Allen, I. E., & Seaman, J. (2010). Learning on demand: Online education in the United States, 2009. Wellesley, MA: Babson Survey Research Group. Allen, I. E., Seaman, J., & Garrett, R. (2007). Blending in: The extent and promise of blended earning in the United States. Needham, MA: Sloan-C. Allen, I., & Seaman, J. (2008). Staying the course. Online education in the United States, 2008. Needham, MA: The Sloan Consortium. Allen, M., Bourhis, J., Burrell, N., & Mabry, E. (2002). Comparing student satisfaction with distance education to traditional classrooms in higher education: A metaanalysis. American Journal of Distance Education, 16(2), 83–97. doi:10.1207/S15389286AJDE1602_3 Al-Shammari, M. (2005). Assessing the learning experience in a business process reengineering (BPR) course at the University of Bahrain. Business Process Management Journal, 11(1), 47–62. doi:10.1108/14637150510578728 Altun, A., Gülbahar, Y., & Madran, O. (2008). Use of a content management system for blended learning: Perceptions of pre-service teachers. Turkish Online Journal of Distance Education-TOJDE, 9(4). Amabile, T. M. (1996). Creativity in context. Boulder, CO: Westview Press.
Amabile, T. M., Hadley, C. N., & Kramer, S. J. (2002). Creativity under the gun. Harvard Business Review, 52–61. Anderson, T. (2002). The hidden curriculum of distance education. Change, 33(6), 28–35. doi:10.1080/00091380109601824 Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103(3), 411–423. doi:10.1037/0033-2909.103.3.411 Anderson, T. (2003). Modes of interaction in distance education: Recent developments and research questions. In Moore, M. G., & Anderson, W. G. (Eds.), Handbook of distance education. Anderson, T. (2004). Teaching in an online learning context. In T. Anderson & F. Elloumi, (Eds.), Theory & practice of online learning (pp. 173–194). Retrieved January 10, 2010, from http://cde.athabascau.ca/online_book/ contents.html Anstine, J., & Skidmore, M. (2005). A small sample study of traditional and online courses with sample selection adjustment. The Journal of Economic Education, 36, 107–127. AQIP. (2008). Principles and categories for improving academic quality. Retrieved December 3, 2009, from http://www.aqip.org/ Arbaugh, J. B. (2000a). Virtual classroom characteristics and student satisfaction in Internet-based MBA courses. Journal of Management Education, 24, 32–54. doi:10.1177/105256290002400104 Arbaugh, J. B. (2000b). How classroom environment and student engagement affect learning in Internet-based MBA courses. Business Communication Quarterly, 63(4), 9–26. doi:10.1177/108056990006300402 Arbaugh, J. B. (2001). How instructor immediacy behaviors affect student satisfaction and learning in Web-based courses. Business Communication Quarterly, 64(4), 42–54. doi:10.1177/108056990106400405 Arbaugh, J. B. (2002). Managing the on-line classroom: A study of technological and behavioral characteristics of Web-based MBA courses. The Journal of High Technology Management Research, 13, 203–223. doi:10.1016/ S1047-8310(02)00049-4
Arbaugh, J. B. (2004). Learning to learn online: A study of perceptual changes between multiple online course experiences. The Internet and Higher Education, 7(3), 169–182. doi:10.1016/j.iheduc.2004.06.001 Arbaugh, J. B. (2005a). How much does subject matter matter? A study of disciplinary effects in Web-based MBA courses. Academy of Management Learning & Education, 4, 57–73. Arbaugh, J. B. (2005b). Is there an optimal design for online MBA courses? Academy of Management Learning & Education, 4, 135–149. Arbaugh, J. B. (2008). Does the community of inquiry framework predict outcomes in online MBA courses? International Review of Research in Open and Distance Learning, 9, 1–21. Arbaugh, J. B. (2010a). Online and blended business education for the 21st century: Current research and future directions. Oxford, UK: Chandos Publishing. Arbaugh, J. B. (2010b). Sage, guide, both, or neither? An exploration of instructor roles in online MBA courses. Computers & Education, 55, 1234–1244. doi:10.1016/j. compedu.2010.05.020 Arbaugh, J. B. (2010c). Do undergraduates and MBAs differ online? Initial conclusions from the literature. Journal of Leadership & Organizational Studies, 17, 129–142. doi:10.1177/1548051810364989 Arbaugh, J. B., Bangert, A., & Cleveland-Innes, M. (2010). Subject matter effects and the community of inquiry (CoI) framework: An exploratory study. The Internet and Higher Education, 13, 37–44. doi:10.1016/j.iheduc.2009.10.006 Arbaugh, J. B., & Benbunan-Fich, R. (2006). An investigation of epistemological and social dimensions of teaching in online learning environments. Academy of Management Learning & Education, 5, 435–447. Arbaugh, J. B., & Benbunan-Fich, R. (2007). Examining the influence of participant interaction modes in Webbased learning environments. Decision Support Systems, 43, 853–865. doi:10.1016/j.dss.2006.12.013
Arbaugh, J. B., Cleveland-Innes, M., Diaz, S., Garrison, D. R., Ice, P., & Richardson, J. (2008). Developing a community of inquiry instrument: Testing a measure of the Community of Inquiry framework using a multiinstitutional sample. The Internet and Higher Education, 11, 133–136. doi:10.1016/j.iheduc.2008.06.003 Arbaugh, J. B., Desai, A. B., Rau, B. L., & Sridhar, B. S. (2010). A review of research on online and blended learning in the management discipline: 1994-2009. Organization Management Journal, 7(1), 39–55. doi:10.1057/ omj.2010.5 Arbaugh, J. B., & Duray, R. (2002). Technological and structural characteristics, student learning and satisfaction with Web-based courses: An exploratory study of two MBA programs. Management Learning, 33, 231–247. doi:10.1177/1350507602333003 Arbaugh, J. B., Godfrey, M. R., Johnson, M., Leisen Pollack, B., Niendorf, B., & Wresch, W. (2009). Research in online and blended learning in the business disciplines: Key findings and possible future directions. The Internet and Higher Education, 12(2), 71–87. doi:10.1016/j. iheduc.2009.06.006 Arbaugh, J. B., & Hornik, S. C. (2006). Do Chickering and Gamson’s seven principles also apply to online MBAs? Journal of Educators Online, 3(2). Retrieved September 1, 2006, from http://www.thejeo.com/ Arbaugh, J. B., & Hwang, A. (2006). Does teaching presence exist in online MBA courses? The Internet and Higher Education, 9(1), 9–21. doi:10.1016/j.iheduc.2005.12.001 Arbaugh, J. B., Hwang, A., & Pollack, B. L. (2010). A review of research methods in online and blended business education: 2000-2009. In Eom, S. B., & Arbaugh, J. B. (Eds.), Student satisfaction and learning outcomes in e-learning: An introduction to empirical research. Hershey, PA: IGI Global. Arbaugh, J. B., & Rau, B. L. (2007). A study of disciplinary, structural, and behavioral effects on course outcomes in online MBA courses. Decision Sciences Journal of Innovative Education, 5(1), 65–95. doi:10.1111/j.15404609.2007.00128.x
Arbaugh, J. B., & Warell, S. S. (2009). Distance learning and Web-based instruction in management education. In Armstrong (Ed.), The Sage handbook of management learning, education and development (vol. 7, pp. 231-254).
Baker, J. D. (2004). An investigation of relationships among instructor immediacy and affective and cognitive learning in the online classroom. The Internet and Higher Education, 7(1), 1–13. doi:10.1016/j.iheduc.2003.11.006
Ardichvili, A., Maurer, M., Li, W., Wentling, T., & Stuedemann, R. (2006). Cultural influences on knowledge sharing through online communities of practice. Journal of Knowledge Management, 10(1), 94–107. doi:10.1108/13673270610650139
Baker, R., & Yacef, K. (2009). The state of educational data mining in 2009: A review and future visions. Journal of Educational Data Mining, 1, 3–17.
Argyris, C., Putname, R., & Smith, D. (1985). Action science: Concepts, methods and skills for research and intervention. San Francisco, CA: Joessey-Bass. Armstrong, S. J., & Fukami, C. V. (2010). Self-assessment of knowledge: A cognitive or affective measure? Perspectives from the management learning and education community. Academy of Management Learning & Education, 9, 335–341. Ashill, N., & Jobber, D. (2009). Measuring state, effect and response uncertainty: Theoretical construct development and empirical validation. Journal of Management. Retrieved from http://jom.sagepub.com/cgi/content/ abstract/0149206308329968v1 ASQ. (2000). Quality assurance standards - guidelines for the application of ANSI/ISO/ASQ Q9001-2000 to education and training institutions. Milwaukee, WI: American Society for Quality. Atkinson, G. (1991). Kolb’s learning style inventory: A practitioner’s perspective. Measurement & Evaluation in Counseling & Development, 23(4), 149–161. Avison, D., Lau, F., Myers, M., & Nielsen, P. A. (1999). Action research. Communications of the ACM, 42(1), 94–97. doi:10.1145/291469.291479 Bagozzi, R. P. (1984). A prospectus for theory construction in marketing. Journal of Marketing, 48(1), 11–29. doi:10.2307/1251307 Bagozzi, R. P., & Baumgartner, H. (1994). The evaluation of structural equation models and hypothesis testing. In Bagozzi, R. P. (Ed.), Principles of marketing research (pp. 386–422). Cambridge, MA: Blackwell.
Bakerville, R. L. (1999). Investigating Information Systems with action research. Communications of the Association for Information Systems, 2(19). Bandi-Rao, S., Radtke, J., Holmes, A., & Davis, P. (2008). Keeping the human element at the center college-level writing online: Methods and materials. In C. Bonk, et al. (Eds.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2008 (p. 25). Chesapeake, VA: AACE Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice Hall. Bangert, A. W. (2008). The influence of teaching presence and social presence on the quality of online critical inquiry. Journal of Computing in Higher Education, 20(1), 34–61. Bangert, A. W., & Baumberger. (2005). Research designs and statistical techniques used in the Journal of Counseling & Development, 1990 -2001. Journal of Counseling and Development, 83, 480–487. Barak, M., & Dori, Y. J. (2004). Enhancing undergraduate students chemistry understanding through project-based learning in an IT environment. Science Education, 89(1), 117–139. doi:10.1002/sce.20027 Barclay, D., Higgins, C., & Thompson, R. (1995). The partial least squares (PLS) approach to causal modeling: Personal computer adoption and use as an illustration (with commentaries). Technology Studies, 2(2), 285–324. Barnes, F. B., Preziosi, R. C., & Gooden, D. J. (2004). An examination of the learning styles of online MBA students and their preferred course delivery methods. New Horizons in Adult Education, 18(2), 16–30. Barrick, M. R., & Mount, M. K. (1991). The BigFive personality dimensions and performance: A meta-analysis. Personnel Psychology, 44(1), 1–26. doi:10.1111/j.1744-6570.1991.tb00688.x
Barrit, C., Lewis, D., & Wieseler, W. (1999). CISCO Systems reusable information object strategy version 3.0. Cisco Whitepaper. Retrieved on December 29, 2009, from http://www.cisco.com /warp/public/779/ibs/ solutions/ learning/ whitepapers/el_cisco_rio.pdf Barron, J. S., Schwartz, D. L., Vye, N. L., Moore, A., Petrosino, A., & Zech, L. (1998). Doing with understanding: Lessons from research on problem- and project-based learning. Journal of the Learning Sciences, 7(3&4), 271–311. doi:10.1207/s15327809jls0703&4_2 Barrón, H. S. (2004). La educación en línea en México. Edutec: Revista electrónica de tecnología educativa, 18. Retrieved from http://dialnet.unirioja.es /servlet/articulo? codigo=1064579 Barros, B., & Verdejo, M. F. (2000). Analyzing student interaction processes in order to improve collaboration: The degree approach. International Journal of Artificial Intelligence in Education, 11, 221–241. Bauer, K. W., & Liang, Q. (2003). The effect of personality and precollege characteristics on first-year activities and academic performance. Journal of College Student Development, 44, 277–290. doi:10.1353/csd.2003.0023 Becher, T. (1994). The significance of disciplinary differences. Studies in Higher Education, 19, 151–161. do i:10.1080/03075079412331382007 Becher, T., & Trowler, P. R. (2001). Academic tribes and territories (2nd ed.). Berkshire, UK: Society for Research into Higher Education & Open University Press. Beckwith, D. (1987). Group problem-solving via computer conferencing: The realizable potential. Canadian Journal of Educational Communication, 16(2), 89–106. Benbunan-Fich, R. (2002). Improving education and training with IT. Communications of the ACM, 45(6), 94–99. doi:10.1145/508448.508454 Benbunan-Fich, R., & Arbaugh, J. B. (2006). Separating the effects of knowledge construction and group collaboration in learning outcomes of Web-based courses. Information & Management, 43(6), 778–793. doi:10.1016/j. im.2005.09.001
Benbunan-Fich, R., & Hiltz, S. R. (2003). Mediators of the effectiveness of online courses. IEEE Transactions on Professional Communication, 46(4), 298–312. doi:10.1109/TPC.2003.819639 Benet-Martinez, V., & John, O. P. (1998). Los Cinco Grandes across cultures and ethnic groups: Multitraitmultimethod analyses of the Big-Five in Spanish and English. [from http://www.testmasterinc.com/products/]. Journal of Personality and Social Psychology, 75, 729– 750. Retrieved November 26, 2007. doi:10.1037/00223514.75.3.729 Bentler, P. M. (1990). Comparative fit indices in structural models. Psychological Bulletin, 107, 238–246. doi:10.1037/0033-2909.107.2.238 Bentler, P. M. (1995). EQS structural equations program annual, multivariate software. Encino, CA: Multivariate Software. Bentler, P. M., & Bonnet, D. C. (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin, 88(3), 588–606. doi:10.1037/0033-2909.88.3.588 Bereiter, C., & Scardamalia, M. (2003). Learning to work creatively with knowledge. In Corte, E. D., Verschaffel, L., Entwistle, N., & Merriënboer, J. V. (Eds.), Powerful learning environments: Unravelling basic components and dimensions (pp. 73–78). Oxford, UK: Elsevier Science. Berenson, R., Boyles, G., & Weaver, A. (2008). Emotional intelligence as a predictor for success in online learning. International Review of Research in Open and Distance Learning, 9(2), 1–16. Berge, Z. L. (1995). Facilitating computer conferencing: Recommendations from the field. Educational Technology, 35(1), 22–30. Bernard, R. M., Abrami, P. C., Lou, Y., & Borokhovski, E. (2004). A methodological morass? How can we improve quantitative research in distance education. Distance Education, 25(2), 175–198. doi:10.1080/0158791042000262094 Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, E., Tamim, R. M., Surkes, M. A., & Bethel, E. C. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 79, 1243–1289. doi:10.3102/0034654309333844
Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., & Wozney, L., W… Huang, B. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379–439. doi:10.3102/00346543074003379 Bernstein, I. H., Garbin, C. P., & McClellan, P. G. (1983). A confirmatory factoring of the California Psychological Inventory. Educational and Psychological Measurement, 43, 687–691. doi:10.1177/001316448304300302 Berry, R. W. (2002). The efficacy of electronic communication in the business school: Marketing students’ perceptions of virtual teams. Marketing Education Review, 12(2), 73–78. Bickel, R. (2007). Multilevel analysis for applied research: It’s just regression!New York, NY: Guilford Press. Biggs, J. B. (1993). From theory to practice: A cognitive system approach, higher education. Research for Development, 12, 73–85. Biglan, A. (1973). The characteristics of subject matter in different academic areas. The Journal of Applied Psychology, 57(3), 195–203. doi:10.1037/h0034701 Birch, A., & Irvine, V. (2009). Preservice teachers’ acceptance of ICT integration in the classroom: Applying the utaut model. Educational Media International, 46(4), 295–315..doi:10.1080/09523980903387506 Black, K. (2008). Business statistics for contemporary decision making (5th ed.). Hoboken, NJ: Wiley. Blend-XL. (2006). Progress report. (225552-CP-1-20051-NL-MINERVA-MPP). Retrieved Jan 5, 2010, from http://www.blend-xl.eu/files/publications/blend-xl%20 progress%20report.pdf Blili, S., Raymond, L., & Rivard, S. (1998). Impact of task uncertainty, end-user involvement and competence on the success of end-user computing. Information & Management, 33(3), 137–153. doi:10.1016/S03787206(97)00043-8 Blumenfeld, P., Soloway, E., Marx, R., Krajcik, J., Guzdial, M., & Palincsar, A. (1991). Motivating projectbased learning: Sustaining the doing, supporting the learning. Educational Psychologist, 26(3&4), 369–398. doi:10.1207/s15326985ep2603&4_8
Bocchi, J., Eastman, J. K., & Swift, C. O. (2004). Retaining the online learner: Profile of students in an online MBA program and implications for teaching them. Journal of Education for Business, 79, 245–253. doi:10.3200/ JOEB.79.4.245-253 Bodea, V. (2003). Standards for data mining languages. The Proceedings of the Sixth International Conference on Economic Informatics - Digital Economy, (pp. 502506). INFOREC Printing House, ISBN 973-8360-02-1, Bucureşti. Bodea, V. (2007). Application and benefits of knowledge management in universities – a case study on student performance enhancement. Informatics in Knowledge Society, The Proceedings of the Eight International Conference on Informatics in Economy, May 17-18, ASE Printing House, (pp. 1033-1038). Bodea, V. (2008). Knowledge management systems. Ph.D thesis, supervised by Prof. Ion Gh. Roşca, The Academy of Economic Studies, Bucharest. Bodea, V., & Roşca, I. (2007). Analiza performanţelor studenţilor cu tehnici de data mining: studiu de caz în Academia de Studii Economice din Bucureşti. In Bodea, C., & Andone, I. (Eds.), Managementul cunoaşterii în universitatea modernă. Editura Academiei de Studii Economice din Bucureşti. Boisier, S. (2005). ¿Hay espacio para el desarrollo local en la globalización? Revista de la CEPAL, 86, 47-62. Retrieved from http://dialnet.unirioja.es /servlet/articulo? codigo=1257248 Bollen, K. A. (1984). Multiple indicators: Internal consistency or no necessary relationship? Quality & Quantity, 18(4), 377–385. doi:10.1007/BF00227593 Bollen, K. A. (1989). A new incremental fit index for general structural models. Sociological Methods & Research, 17, 303–316. doi:10.1177/0049124189017003004 Bollen, K. A. (1989). Structural equations with latent variables. New York, NY: John Wiley and Sons. Bollen, K. A., & Lennox, R. (1991). Conventional wisdom on measurement: A structural equation perspective. Psychological Bulletin, 110(2), 305–314. doi:10.1037/00332909.110.2.305
Bollen, K. A., & Long, J. S. (1993). Testing structural equation models. Newbury Park, CA: Sage Publications. Bonk, C. J., & Graham, C. R. (Eds.). (2006). The handbook of blended learning: Global perspectives, local designs. San Francisco, CA: Pfeiffer. Bonk, C. J., Kim, K. J., & Zeng, T. (2006). Future directions of blended learning in higher education and workplace learning settings. In Bonk, C. J., & Graham, C. R. (Eds.), Handbook of blended learning: Global perspectives, local designs (pp. 550–568). San Francisco, CA: Pfeiffer. Boomsma, A. (2000). Reporting analyses of covariance structures. Structural Equation Modeling, 7(3), 461–483. doi:10.1207/S15328007SEM0703_6 Bosco, M. D., & Barrón, H. (2008). La educación a distancia en México: Narrativa de una historia silenciosa. Mexico City, Mexico: UNAM. Boston, W., Diaz, S. R., Gibson, A. M., Ice, P., Richardson, J., & Swan, K. (2009). An exploration of the relationship between indicators of the community of inquiry framework and retention in online programs. Journal of Asynchronous Learning Networks, 13(3), 67–83. Botwin, M. D., & Buss, D. M. (1989). Structure of actreport data: Is the five-factor model of personality recaptured? Journal of Personality and Social Psychology, 56, 988–1001. doi:10.1037/0022-3514.56.6.988 Bouckaert, R., Frank, E., Hall, M., Kirkby, R., Reutemann, P., Seewald, A., & Scuse, D. (2010). WEKA manual for version 3-6-2. University of Waikato, Hamilton, New Zealand Brew, L. S. (2008). The role of student feedback in evaluating and revising a blended learning course. The Internet and Higher Education, 11, 98–105. doi:10.1016/j. iheduc.2008.06.002 Briggs-Myers, I. (1980). Introduction to type (3rd ed.). Palo Alto, CA: Consulting Psychologists Press. Brodke, M. H., & Mruk, C. J. (2009). Crucial components of online teaching success: A review and illustrative case study. AURCO Journal, 15, 187–205.
Brook, C., & Oliver, R. (2007). Exploring the influence of instructor actions on community development in online settings. In Lambropoulos, N., & Zaphiris, P. (Eds.), User-centered design of online learning communities. Hershey, PA: Idea Group. Brown, B. J., & Mundrake, G. A. (2007). Proof of student achievement: Assessment for an evolving business education curriculum. 2007 NBEA Yearbook, 45, 130-145. Brown, B. W., & Liedholm, C. E. (2002). Can Web courses replace the classroom in principles of microeconomics? The American Economic Review, 92(2), 444–448. doi:10.1257/000282802320191778 Brown, K. (2001). Using computers to deliver training: Which employees learn and why? Personnel Psychology, 54, 271–296. doi:10.1111/j.1744-6570.2001.tb00093.x Brown, R. (2003). Blending learning: Rich experiences from a rich picture. Training and Development in Australia, 30(3), 14–17. Bryant, S. M., Kahle, J. B., & Schafer, B. A. (2005). Distance education: A review of the contemporary literature. Issues in Accounting Education, 20, 255–272. doi:10.2308/iace.2005.20.3.255 Bulu, S. T., & Yildirim, Z. (2008). Communication behaviors and trust in collaborative online teams. Journal of Educational Technology & Society, 11(1), 132–147. Byrne, B. M. (1998). Structural equation modeling with LISREL, PRELIS, and SIMPLIS: Basic concepts, applications, and programming. Mahwah, NJ: Lawrence Erlbaum. Byrne, B. M. (2010). Structural equation modeling with AMOS: Basic concepts, applications, and programming (2nd ed.). New York, NY: Routledge Academic. Byrne, B. (1994). Structural equation modeling with EQS and EQS/Windows. Thousand Oaks, CA: Sage Publications. Byrne, R. (2002). Web-based learning versus traditional management development methods. Singapore Management Review, 24(2), 59–68. Cabero, J. (2006). Bases pedagógicas del e-learning. Revista de Universidad y Sociedad del Conocimiento, 3(1), 1-10. Retrieved from http://www.uoc.edu/rusc/ 3/1/ dt/esp/cabero.pdf
Cameron, B. A., Morgan, K., Williams, K. C., & Kostelecky, K. L. (2009). Group projects: Student perceptions of the relationship between social tasks and a sense of community in online group work. American Journal of Distance Education, 23, 20–33. doi:10.1080/08923640802664466 Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Chicago, IL: Rand McNally. Campo, M. A., & De Vrieze, L. (2008). Instructional design model: Blending digital technology in online learning. In J. Luca & E. Weippl (Eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2008 (pp. 2324-2329). Chesapeake, VA: AACE. Canfield, A. A. (1992). Canfield learning styles inventory manual. Los Angeles, CA: Western Psychological Services. Cardler, J. (1997). Summary of current research and evaluation of findings on technology in education. Working Paper, Educational Support Systems, San Mateo, CA. Carrell, L. J. (2010). Thanks for asking: A (red-faced?) response from communication. Academy of Management Learning & Education, 9, 300–304. Carver, C. A., Howard, R. A., & Lane, W. D. (1999). Addressing different learning styles through course hypermedia. IEEE Transactions on Education, 42. Caspi, A., Chajut, E., Saporta, K., & Marom, R. B. (2006). The influence of personality on social participation in learning environments. Learning and Individual Differences, 16, 129–144. doi:10.1016/j.lindif.2005.07.003 Cassel, C, M., Hackl, P., & Westlund, A.H. (2000). On measurement of intangible assets: A study of robustness of partial least squares. Total Quality Management, 11(7), S897–S907. doi:10.1080/09544120050135443 Castells, M. (2002). La era de la información: Economía sociedad y cultura. La sociedad red (4 ed.). México: Siglo XXI Ediciones. Cetindamar, D., & Olafsen, R. N. (2005). E-learning in a competitive firm setting. Innovations in Education and Teaching International, 42(4), 325–335. doi:10.1080/14703290500062581
Chamorro-Premuzic, T., & Furnham, A. (2003a). Personality traits and academic examination performance. European Journal of Personality, 17, 237–250. doi:10.1002/ per.473 Chamorro-Premuzic, T., & Furnham, A. (2003b). Personality predicts academic performance: Evidence from two longitudinal university samples. Journal of Research in Personality, 37, 319–338. doi:10.1016/ S0092-6566(02)00578-0 Chamorro-Premuzic, T., & Furnham, A. (2005). Personality and intellectual competence. Mahwah, NJ: Lawrence Erlbaum Associates. Chamorro-Premuzic, T., & Furnham, A. (2006). Intellectual competence and the intelligent personality: A third way in differential psychology. Review of General Psychology, 10(3), 251–267. doi:10.1037/1089-2680.10.3.251 Chamorro-Premuzic, T., & Funham, A. (2009). Mainly openness: The relationship between the Big-Five personality traits and learning approaches. Learning and Individual Differences, 19, 524–529. doi:10.1016/j. lindif.2009.06.004 Chapman, C., Clinton, J., & Kerber, R (2005). CRISP-DM 1.0, step-by-step data mining guide. Charpentier, M., Lafrance, C., & Paquette, G. (2006). International e-learning strategies: Key findings relevant to the Canadian context. Retrieved from http://www. ccl-cca.ca/pdfs/CommissionedReports/JohnBissInternationalELearningEN.pdf Chartier, B., & Gibson, B. (2007). Project-based learning: A search and rescue UAV – perceptions of an undergraduate engineering design team: A preliminary study. Proceedings of the 2007 AaeE Conference, Melbourne. Chau, P. Y. K. (1997). Re-examining a model for evaluating information center success using a structural equation modeling approach. Decision Sciences, 28(2), 309–334. doi:10.1111/j.1540-5915.1997.tb01313.x Chen, C. C., & Jones, K. T. (2007). Blended learning vs. traditional classroom settings: Assessing effectiveness and student perceptions in an MBA accounting course. Journal of Educators Online, 4(1), 1–15.
Cheung, L. L. W., & Kan, A. C. N. (2002). Evaluation of factors related to student performance in a distance-learning business communication course. Journal of Education for Business, 77, 257–263. doi:10.1080/08832320209599674 Chew, E., Jones, N., & Turner, D. (2008). Critical review of the blended learning models based on Maslow’s and Vygotsky’s educational theory. In J. Fong, et al. (Eds.), Hybrid learning and education, (LNCS 5169, pp. 40-53). Chiasson, M., Germonprez, M., & Mathiassen, L. (2008). Pluralist action research: A review of the Information Systems literature. Information Systems Journal, 19, 31–54. doi:10.1111/j.1365-2575.2008.00297.x Chin, W. W. (1995). Open peer commentary on Barclay, D. Higgins, C. & Thompson, R. The partial least squares (PLS) approach to causal modeling: Personal computer adoption and use as an illustration. Technology Studies, 2(2), 310–319. Chin, W. W. (1998). The partial least squares approach to structural equation modeling. In Marcoulides, G. A. (Ed.), Modern methods for business research (pp. 295–336). Mahwah, NJ: Lawrence Erlbaum Associates.
Cleary, T. S. (2001). Indicators of quality. Planning for Higher Education, 29(3), 19–28. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum. Cohen, P., Cohen, J., Teresi, J., Marchi, M., & Velez, C. N. (1990). Problems in the measurement of latent variables in structural equations causal models. Applied Psychological Measurement, 14(2), 183–196. doi:10.1177/014662169001400207 Collis, B. (1996). Tele-learning in a digital world: The future of distance learning. London, UK: International Thompson Computer Press. Compeau, D. R., & Higgins, C. A. (1995). Application of social cognitive theory to training for computer skills. Information Systems Research, 6(2), 118–143. doi:10.1287/isre.6.2.118 Conard, M. A. (2006). Aptitude is not enough: How personality and behavior predict academic performance. Journal of Research in Personality, 40, 339–346. doi:10.1016/j. jrp.2004.10.003
Chin, W. W., & Gopal, A. (1995). Adoption intention in GSS: Relative importance of beliefs. The Data Base for Advances in Information Systems, 26(2/3), 42–64.
Connolly, M., Jones, C., & Jones, N. (2007). New approaches, new vision: Capturing teacher experiences in a brave new online world. Open Learning, 22(1), 43–56. doi:10.1080/02680510601100150
Chin, W. W., & Todd, P. A. (1995). On the use, usefulness and ease of use of structural equation modeling in MIS research: A note of caution. Management Information Systems Quarterly, 19(2), 237–246. doi:10.2307/249690
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston, MA: Houghton-Mifflin.
Churchill, G. A. Jr. (1999). Marketing research: Methodological foundations (7th ed.). Orlando, FL: The Dryden Press.
Corno, L. (1986). The metacognitive control components of self-regulated learning. Contemporary Educational Psychology, 11, 333–346. doi:10.1016/0361476X(86)90029-9
Clark, R. C., & Mayer, R. E. (2008). E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning (2nd ed.). San Francisco, CA: Pfeiffer. Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53, 445–460. Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42, 21–29. doi:10.1007/BF02299088
Costa, P. T. Jr, & McCrae, R. R. (1988). From catalog to classification: Murray’s needs and the five-factor model. Journal of Personality and Social Psychology, 55, 258–265. doi:10.1037/0022-3514.55.2.258 Costa, P. T. Jr, & McCrae, R. R. (1992). Revised NEO Personality Inventory (NEO-PI-R) and NEO Five-Factor Inventory (NEO-FFI) professional manual. Odessa, FL: Psychological Assessment Resources. Crisp-dm. (2010). CRoss Industry Standard Process for Data Mining. Retrieved from http://www.crisp-dm.org/
Cronje, J. C. (2006). Pretoria to Khartoum - how we taught an Internet-supported Masters’ programme across national, religious, cultural and linguistic barriers. Journal of Educational Technology & Society, 9(1), 276–288.
Davis, R., & Wong, D. (2007). Conceptualizing and measuring the optimal experience of the e-learning environment. Decision Sciences Journal of Innovative Education, 5, 97–126. doi:10.1111/j.1540-4609.2007.00129.x
Crowley, S. L., & Fan, X. (1997). Structural equation modeling: Basic concepts and applications in personality assessment research. Journal of Personality Assessment, 68(3), 508–531. doi:10.1207/s15327752jpa6803_4
Daymont, T., & Blau, G. (2008). Student performance in online and traditional sections of an undergraduate management course. Journal of Behavioral and Applied Management, 9, 275–294.
Dacko, S. G. (2001). Narrowing skill development gaps in Marketing and MBA programs: The role of innovative technologies for distance learning. Journal of Marketing Education, 23, 228–239. doi:10.1177/0273475301233008
De Fruyt, F., & Mervielde, I. (1996). Personality and interests as predictors of educational streaming and achievement. European Journal of Personality, 10, 405–425. doi:10.1002/(SICI)1099-0984(199612)10:5<405::AID-PER255>3.0.CO;2-M
Dagada, R., & Jakovljevic, M. (2004). Where have all the trainers gone? E-learning strategies and tools in the corporate training environment. South African Institute of Computer Scientists and Information Technologists on IT Research in Developing Countries. ACM International Conference Proceeding Series, (pp. 194-203). Stellenbosch, Western Cape, South Africa.
Damanpour, F. (1991). Organizational innovation: A meta-analysis of effects of determinants and moderators. Academy of Management Journal, (September): 555–590. doi:10.2307/256406
D'Andrea, V., & Gosling, D. (2005). Improving teaching and learning in higher education: A whole institution approach. Society for Research into Higher Education & Open University Press.
D'Andrea, V. (2007). Improving teaching and learning in higher education: Can learning theory add value to quality reviews? In Westerheijden, D. F., Stensaker, B., & Rosa, M. J. (Eds.), Quality assurance in higher education (pp. 209–223). The Netherlands: Springer. doi:10.1007/978-1-4020-6012-0_8
Davenport, T. (2001). Successful knowledge management projects. Sloan Management Review, 39(2).
Davis, F. (1989). Perceived usefulness, perceived ease of use and user acceptance of information technology. Management Information Systems Quarterly, 13, 319–340. doi:10.2307/249008
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003. doi:10.1287/mnsc.35.8.982
Dean, J. W., & Bowen, D. E. (1994). Management theory and total quality: Improving research and practice through theory development. Academy of Management Review, 19(3), 392–418. doi:10.2307/258933
Dehler, G. E., Beatty, J. E., & Leigh, J. S. A. (2010). From good teaching to scholarly teaching: Legitimizing management education and learning scholarship. In Wankel, C., & DeFillippi, R. (Eds.), Being and becoming a management education scholar (pp. 95–118). Charlotte, NC: Information Age Publishing.
Delavari, N., Beikzadeh, M. R., & Amnuaisuk, S. K. (2005). Application of enhanced analysis model for data mining processes in higher educational system. Proceedings of ITHET 6th Annual International Conference, Juan Dolio, Dominican Republic.
Delavari, N., Beikzadeh, M. R., & Shirazi, M. R. A. (2004). A new model for using data mining in higher educational system. Proceedings of 5th International Conference on Information Technology Based Higher Education and Training: ITHET '04, Istanbul, Turkey.
DeLone, W. H., & McLean, E. R. (1992). Information System success: The quest for the dependent variable. Information Systems Research, 3(1), 60–95. doi:10.1287/isre.3.1.60
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of Information Systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9–30.
Dembo, M., & Eaton, M. (2000). Self regulation of academic learning in middle-level schools. The Elementary School Journal, 100(5), 473–490. doi:10.1086/499651
Digman, J. M. (1990). Personality structure: Emergence of the five-factor model. Annual Review of Psychology, 41, 417–440. doi:10.1146/annurev.ps.41.020190.002221
Dempsey, J. V., & Van Eck, R. N. (2002). Instructional design online: Evolving expectations. In Reiser, R. A., & Dempsey, J. V. (Eds.), Trends and issues in instructional design and technology (pp. 281–294). Upper Saddle River, NJ: Merrill Prentice-Hall.
Digman, J. M., & Inouye, J. (1986). Further specification of the five robust factors of personality. Journal of Personality and Social Psychology, 50, 116–123. doi:10.1037/0022-3514.50.1.116
DeRouin, R. E., Fritzsche, B. A., & Salas, E. (2004). Optimizing e-learning: Research-based guidelines for learner-controlled training. Human Resource Management, 43(2-3), 147–162. doi:10.1002/hrm.20012
DeRouin, R. E., Fritzsche, B. A., & Salas, E. (2005). E-learning in organizations. Journal of Management, 31(6), 920–940. doi:10.1177/0149206305279815
DeVellis, R. F. (1991). Scale development theory and applications. Applied Research Methods Series (Vol. 16). Sage Publications.
Dew, J. R., & Nearing, M. M. (2004). Continuous quality improvement in higher education. Westport, CT: Praeger Publishers.
Dewey, J. (1938). Experience and education. New York, NY: Macmillan.
Diamantopoulos, A., & Siguaw, J. A. (2000). Introducing LISREL. London, UK: Sage Publications.
Diamantopoulos, A. (1999). Export performance measurement: Reflective versus formative indicators. International Marketing Review, 16(6), 444–457. doi:10.1108/02651339910300422
Diamantopoulos, A., & Winklhofer, H. (2001). Index construction with formative indicators: An alternative to scale development. JMR, Journal of Marketing Research, 38(2), 269–277. doi:10.1509/jmkr.38.2.269.18845
Digman, J. M., & Takemoto-Chock, N. K. (1981). Factors in the natural language of personality: Re-analysis, comparison, and interpretation of six major studies. Multivariate Behavioral Research, 16, 149–170. doi:10.1207/s15327906mbr1602_2
Dillon, C. L., & Walsh, S. M. (1992). Faculty: The neglected resource in distance education. American Journal of Distance Education, 6(3), 5–21. doi:10.1080/08923649209526796
Dobbs, R. R., Waid, C. A., & del Carmen, A. (2009). Students' perceptions of online courses: The effect of online course experience. The Quarterly Review of Distance Education, 10(1), 9–26.
Doll, W. J., & Torkzadeh, G. (1988). The measurement of end user computing satisfaction. Management Information Systems Quarterly, 12(2), 259–274. doi:10.2307/248851
Doppelt, Y. (2007). Assessing creative thinking in design-based learning. International Journal of Technology and Design Education, 19(1), 55–65. doi:10.1007/s10798-006-9008-y
Drago, W., & Peltier, J. (2004). The effects of class size on the effectiveness of online courses. Management Research News, 27(10), 27–41. doi:10.1108/01409170410784310
Drago, W., Peltier, J., Hay, A., & Hodgkinson, M. (2005). Dispelling the myths of online education: Learning via the information superhighway. Management Research News, 28(6/7), 1–17. doi:10.1108/01409170510784904
Diaz, M. C., & Loraas, T. (2010). Learning new uses of technology while on an audit engagement: Contextualizing general models to advance pragmatic understanding. International Journal of Accounting Information Systems, 11(1), 61–77. doi:10.1016/j.accinf.2009.05.001
Drago, W., Peltier, J., & Sorensen, D. (2002). Course content or the instructor: Which is more important in online teaching? Management Research News, 25(6/7), 69–83. doi:10.1108/01409170210783322
Dielman, T. E. (1996). Applied regression analysis for business and economics (2nd ed.). Belmont, CA: Wadsworth Publishing Company.
Driver, M. (2002). Exploring student perceptions of group interaction and class satisfaction in the Web-enhanced classroom. The Internet and Higher Education, 5(1), 35–45. doi:10.1016/S1096-7516(01)00076-8
Duarte, D., & Tennant Snyder, N. (2001). Mastering virtual teams (2nd ed.). San Francisco, CA: Jossey-Bass.
Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and passion for long-term goals. Journal of Personality and Social Psychology, 92, 1087–1101. doi:10.1037/0022-3514.92.6.1087
DuFrene, D. D., Lehman, C. M., Kellermanns, F. W., & Pearson, R. A. (2009). Do business communication technology tools meet learner needs? Business Communication Quarterly, 72(2), 146–162. doi:10.1177/1080569909334012
Dunn, R., Beaudry, J., & Klavas, A. (1989). Survey research on learning styles. Educational Leadership, 46, 50–58.
Dunn, R., Beaudry, J., & Klavas, A. (2002). Survey of research on learning styles. California Journal of Science Education, 2(2), 75–79.
Dunn, R., & Dunn, K. (1999). The complete guide to the learning styles inservice system. Allyn and Bacon.
Duval, E., & Hodgins, W. (2003). A LOM research agenda. In G. Hencsey, B. White, Y. Chen, L. Kovacs, & S. Lawrence (Eds.), Proceedings of the 12th International Conference on World Wide Web, Budapest, Hungary, (pp. 659–667).
Duyck, P., Pynoo, B., Devolder, P., Voet, T., Adang, L., & Vercruysse, J. (2008). User acceptance of a picture archiving and communication system. Applying the unified theory of acceptance and use of technology in a radiological setting. Methods of Information in Medicine, 47(2), 149–156.
Dykman, C. A., & Davis, C. K. (2008a). Online education forum part two – teaching online versus teaching conventionally. Journal of Information Systems Education, 19, 157–164.
Dykman, C. A., & Davis, C. K. (2008b). Online education forum part three – a quality online educational experience. Journal of Information Systems Education, 19, 281–289.
Dziuban, C., Moskal, P., Brophy, J., & Shea, P. (2007). Student satisfaction with asynchronous learning. Journal of Asynchronous Learning Networks, 11(1), 87–95.
Dziuban, C. D., Moskal, P. D., & Hartman, J. (2005). Higher education, blended learning, and the generations: Knowledge is power: No more. In Bourne, J., & Moore, J. C. (Eds.), Elements of quality online education: Engaging communities. Needham, MA: Sloan Center for Online Education.
Easton, S. S. (2003). Clarifying the instructor's role in online distance learning. Communication Education, 52(2), 87. doi:10.1080/03634520302470
Eaton, J. S. (2003). Before you bash accreditation, consider the alternatives. The Chronicle of Higher Education, 49(25), B15.
ECAR. (2007). The ECAR study of undergraduate students and information technology, (study 6). EDUCAUSE Center for Applied Research.
EDUCAUSE. (2009). Learning initiative. Retrieved on December 29, 2009, from http://www.educause.edu/
Edwards, J. R. (2001). Multidimensional constructs in organizational behaviour research: An integrative analytical framework. Organizational Research Methods, 4(2), 144–192. doi:10.1177/109442810142004
Efron, B., & Tibshirani, R. J. (1993). An introduction to the bootstrap, monographs on statistics and applied probability. New York, NY: Chapman and Hall.
Egan, T. M. (2005). Factors influencing individual creativity in the workplace: An examination of quantitative empirical research. Advances in Developing Human Resources, 160–181. doi:10.1177/1523422305274527
Ehlers, U. (2009). Understanding quality culture. Quality Assurance in Education, 17(4), 343–363. doi:10.1108/09684880910992322
El Emam, K., Drouin, J.-N., & Melo, W. (1998). SPICE: The theory and practice of software process improvement and capability determination. California: IEEE Computer Society.
Elearning Africa. (2006). Elearning Africa report. Retrieved August 29, 2009, from http://www.elearningafrica.com/pdf/report/postreport_eLA2006.pdf
ELEARNSPACE. (2005). Blended. Retrieved January 5, 2010, from http://www.elearnspace.org/doing/blended.htm
Ellis, R. K. (2009). A field guide to learning management systems. ASTD Learning Circuits. Retrieved Mar 5, 2009, from http://www.astd.org/NR/rdonlyres/12ECDB993B91-403E-9B15-7E597444645D/23395/LMS_fieldguide_20091.pdf
Elshout, J. J., & Akkerman, A. E. (1975). Vijf Persoonlijkheids-faktoren test 5 PFT. Nijmegen: Berkhout BV.
Engelbrecht, E. (2003). E-learning – from hype to reality. Progressio, 25(1).
Engwall, L. (2007). The anatomy of management education. Scandinavian Journal of Management, 23, 4–35. doi:10.1016/j.scaman.2006.12.003
Enríquez Alvarez, A., Cortés Hernández, A. O., Ortiz Boza, A., Zavala Hernández, C., Gallardo Vallejo, C., & Bernal López, E. (2001). Diagnóstico de la educación superior a distancia. Mexico City: ANUIES.
Eom, S. B. (2004). Personal communication with Richard Lomax in regard to the use of T value in structural equation modeling through e-mail.
Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students' perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4(2), 215–235. doi:10.1111/j.1540-4609.2006.00114.x
Ertmer, P. A., & Stepich, D. A. (2004). Examining the relationship between higher-order learning and students' perceived sense of community in an online learning environment. Proceedings of the 10th Australian World Wide Web conference, Gold Coast, Australia.
Esposito Vinzi, V., Chin, W., Henseler, J., & Wang, H. (2010). Handbook of partial least squares. Heidelberg, Germany: Springer. doi:10.1007/978-3-540-32827-8
European Commission. (2005). Mobilizing the brainpower of Europe: Enabling universities to make their full contribution to the Lisbon Strategy. Brussels, Communicate no. 152.
Eurostat. (2009). The Bologna Process in higher education in Europe: Key indicators on the social dimension and mobility. European Communities and IS, Hochschul-Informations-System GmbH. Retrieved from http://epp.eurostat.ec.europa.eu/portal/page/portal/education/bologna_process
Fainholc, B., & Scagnoli, N. (2009). Blended learning through interuniversity collaboration interaction. Paper presented at 23rd ICDE World Conference on Open Learning and Distance Education. Retrieved February 4, 2010, from http://www.ou.nl/Docs/Campagnes/ICDE2009/Papers/Final_Paper_134Fainholc.pdf
Falk, R. F., & Miller, N. B. (1992). A primer for soft modeling. Akron, OH: University of Akron Press.
Falowo, R. O. (2007). Factors impeding implementation of Web-based distance learning. AACE Journal, 15(3), 315–338.
Farsides, T., & Woodfield, R. (2003). Individual differences and undergraduate academic success: The roles of personality, intelligence, and application. Personality and Individual Differences, 34, 1225–1243. doi:10.1016/S0191-8869(02)00111-3
Felder, R. M., & Brent, R. (2005). Understanding student differences. Journal of Engineering Education, 94(1), 57–72.
Fernandes, E., Madhour, H., Miniaoui, S., & Forte, M. W. (2005). Phoenix Tool: A support to semantic learning model. Workshop on Applications of Semantic Web Technologies for e-Learning (SWEL@ICALT'05), Kaohsiung, Taiwan, 5–8 July 2005.
Feuer, M. J., Towne, L., & Shavelson, R. J. (2002). Scientific culture and educational research. Educational Researcher, 31(8), 4–14. doi:10.3102/0013189X031008004
Fidishun, D. (2010). Andragogy and technology: Integrating adult learning theory as we teach with technology. Retrieved on March 25th, 2010 from http://frank.mtsu.edu/~itconf/proceed00/fidishun.htm
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley.
Fleming, N. D., & Mills, C. (1992). Not another inventory, rather a catalyst for reflection. To Improve the Academy, 11, 137-155.
Fornell, C. R. (1982). A second generation of multivariate analysis (Vol. 1). New York, NY: Praeger.
Fornell, C. R. (1987). A second generation of multivariate analysis: Classification of methods and implications for marketing research. In Houston, M. J. (Ed.), Review of marketing (pp. 407–450). Chicago, IL: American Marketing Association.
Fornell, C. R., & Barclay, D. (1993). Jackknifing: A supplement to Lohmoller's LVPLS program. Ann Arbor, MI: University of Michigan Press.
Fornell, C. R., & Bookstein, F. L. (1982). Two structural equation models: LISREL and PLS applied to consumer exit-voice theory. JMR, Journal of Marketing Research, 19(4), 440–452. doi:10.2307/3151718
Fornell, C. R., & Cha, J. (1994). Partial least squares. In Bagozzi, R. P. (Ed.), Advanced methods of marketing research (pp. 52–78). Cambridge, MA: Blackwell.
Fornell, C. R., & Larcker, D. (1981). Evaluating structural equation models with unobservable variables and measurement error. JMR, Journal of Marketing Research, 18(1), 39–50. doi:10.2307/3151312
Fornell, C. R., & Larcker, D. F. (1981). Structural equation models with unobservable variables and measurement error. JMR, Journal of Marketing Research, 18(3), 382–388. doi:10.2307/3150980
Fornell, C. R., Lorange, P., & Roos, J. (1990). The cooperative venture formation process: A latent variable structural modeling approach. Management Science, 36(10), 1246–1255. doi:10.1287/mnsc.36.10.1246
Fornell, C., Tellis, G., & Zinkhan, G. (1982). Validity assessment: A structural equation approach using partial least squares. In Walker, B. (Eds.), An assessment of marketing thought and practice (pp. 405–409). Chicago, IL: American Marketing Association.
Fornell, C. R., & Yi, Y. (1992). Assumptions of the two-step approach to latent variable modeling. Sociological Methods & Research, 20(3), 291–319. doi:10.1177/0049124192020003001
Forster, D. A., Dawson, V. M., & Reid, D. (2005). Measuring preparedness to teach with ICT. Australasian Journal of Educational Technology, 21(1), 1–18.
Frailey, D., McNell, E., & Mould, D. (2000). Forum: Debating distance learning. Communications of the ACM, 43(2), 11–15.
Frank, M., Lavy, I., & Elata, D. (2003). Implementing the project-based learning approach in an academic engineering course. International Journal of Technology and Design Education, 13, 273–288. doi:10.1023/A:1026192113732
Frank, T. (2004). Making the grade keeps getting harder. Retrieved October 29, 2009, from http://www.csmonitor.com/2004/0113/p11s01-legn.htm
Friedman, J. H. (1997). Data mining and statistics: What's the connection? Stanford, CA: Stanford University.
Fry, K. (2001). E-learning markets and providers: Some issues and prospects. Education + Training, 233-239.
Fulford, C. P., & Zhang, S. (1993). Perceptions of interaction: The critical predictor in distance education. American Journal of Distance Education, 7(3), 8–21. doi:10.1080/08923649309526830
Furnham, A., & Chamorro-Premuzic, T. (2004). Personality and intelligence as predictors of statistics examination grades. Personality and Individual Differences, 37, 943–955. doi:10.1016/j.paid.2003.10.016
Furnham, A., Chamorro-Premuzic, T., & McDougall, F. (2003). Personality, cognitive ability, and beliefs about intelligence as predictors of academic performance. Learning and Individual Differences, 14, 49–66.
Furnham, A., Christopher, A., Garwood, J., & Martin, G. (2007). Approaches to learning and the acquisition of general knowledge. Personality and Individual Differences, 43, 1563–1571. doi:10.1016/j.paid.2007.04.013
Galusha, J. M. (1997). Barriers to learning in distance education. Interpersonal Computing and Technology: An Electronic Journal for the 21st Century, 5(3/4), 6-14.
Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York, NY: Basic Books.
Gardner, W. L., & Martinko, M. J. (1996). Using the Myers-Briggs type indicator to study managers: A literature review and research agenda. Journal of Management, 22(1), 45–83. doi:10.1177/014920639602200103
Garrido Noguera, C., & Thirión, J. M. (2006). La educación virtual en México: Universidades y aprendizaje tecnológico. In Garrido Noguera, C. (Ed.), El uso de las tecnologías de comunicación e información en la educación superior. Experiencias internacionales (pp. 97–111). Mexico City, Mexico: ELAC.
Garrison, D. R. (2009). Communities of inquiry in online learning: Social, teaching and cognitive presence. In Howard, C. (Eds.), Encyclopedia of distance and online learning (2nd ed., pp. 352–355). Hershey, PA: IGI Global. Garrison, D. R., & Anderson, T. (2003). E-learning in the 21st century: A framework for research and practice. London, UK: Routledge/Falmer. doi:10.4324/9780203166093 Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2, 87–105. doi:10.1016/S1096-7516(00)00016-6 Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7–23. doi:10.1080/08923640109527071 Garrison, D. R., & Arbaugh, J. B. (2007). Researching the community of inquiry framework: Review, issues and future directions. The Internet and Higher Education, 10(3), 157–172. doi:10.1016/j.iheduc.2007.04.001 Garrison, D. R., & Cleveland-Innes, M. (2004). Critical factors in student satisfaction and success: Facilitating student role adjustment in online communities of inquiry. In J. Bourne & J. C. Moore (Eds), Elements of quality online education: Into the mainstream - volume 5 in the Sloan-C series (p. 29-38). Needham, MA: Sloan Center for Online Education. Garrison, D. R., Cleveland-Innes, M., & Fung, T. S. (2010). Exploring causal relations among teaching, cognitive and social presence: A holistic view of the community of inquiry framework. The Internet and Higher Education, 13(1-2), 31–36. doi:10.1016/j.iheduc.2009.10.002 Garrison, D. R., Cleveland-Innes, M., Koole, M., & Kappelman, J. (2006). Revisiting methodological issues in the analysis of transcripts: Negotiated coding and reliability. The Internet and Higher Education, 9(1), 1–8. doi:10.1016/j.iheduc.2005.11.001 Garrison, D. R., & Kanuka, H. (2004). Blended learning: Uncovering its transformative potential in higher education. The Internet and Higher Education, 7, 95–105. doi:10.1016/j.iheduc.2004.02.001
Garrison, D. R., & Vaughan, N. D. (2008). Blended learning in higher education: Framework, principles, and guidelines. San Francisco, CA: Jossey-Bass.
Gay, L. R., Mills, G. E., & Airasian, P. (2006). Educational research: Competencies for analysis and applications (8th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
Gefen, D., & Straub, D. (2005). A practical guide to factorial validity using PLS-Graph: Tutorial and annotated example. Communications of the Association for Information Systems, 16, 91–109.
Geisser, S. (1975). The predictive sample reuse method with applications. Journal of the American Statistical Association, 70(350), 320–328. doi:10.2307/2285815
Gersten, R. (2001). Sorting out the roles of research in the improvement of practice. Learning Disabilities Research & Practice, 16(1), 45–50. doi:10.1111/0938-8982.00005
Gibson, S. G., Harris, M. L., & Colaric, S. M. (2008). Technology acceptance in an academic context: Faculty acceptance of online education. Journal of Education for Business, 83, 355–359. doi:10.3200/JOEB.83.6.355-359
Gliner, J. A., & Morgan, G. A. (2000). Research methods in applied settings: An integrated approach to design and analysis. Mahwah, NJ: Erlbaum.
Goff, M., & Ackerman, P. L. (1992). Personality–intelligence relations: Assessment of typical intellectual engagement. Journal of Educational Psychology, 84, 537–552. doi:10.1037/0022-0663.84.4.537
Gold, A. H., Malhotra, A., & Segars, A. H. (2001). Knowledge management: An organizational capabilities perspective. Journal of Management Information Systems, 18(1), 185–214.
Goldberg, J. S., & Cole, B. R. (2002). Quality management in education: Building excellence and equity in student performance. Quality Management Journal, 9(4), 8–22.
Goodson, C. E., Miertschin, S. L., Stewart, B., & Faulkenberry, L. (2009). Online distance education and student learning: Do they measure up? Paper presented at the Annual Conference of the American Society of Engineering Education, Austin, TX.
Gosling, D., & D'Andrea, V. (2001). Quality development: A new concept for higher education. Quality in Higher Education, 7(1), 7–17. doi:10.1080/13538320120045049
Graham, C., Cagiltay, K., Lim, B. R., Craner, J., & Duffy, T. M. (2001). Seven principles of effective teaching: A practical lens for evaluating online courses. Technology Source, March/April.
Graham, C. R. (2006). Blended learning system: Definition, current trends, future directions. In Bonk, C. J., & Graham, C. R. (Eds.), Handbook of blended learning. San Francisco, CA: Pfeiffer.
Grandzol, J. R., & Grandzol, C. J. (2006). Best practices for online business education. International Review of Research in Open and Distance Learning, 7(1), 1–17.
Gratton-Lavoie, C., & Stanley, D. (2009). Teaching and learning of principles of microeconomics online: An empirical assessment. The Journal of Economic Education, 40(2), 3–25. doi:10.3200/JECE.40.1.003-025
Grzeda, M., & Miller, G. E. (2009). The effectiveness of an online MBA program in meeting mid-career student expectations. Journal of Educators Online, 6(2). Retrieved November 10, 2009, from http://www.thejeo.com/Archives/Volume6Number2/GrzedaandMillerPaper.pdf
Guardado, M., & Shi, L. (2007). ESL students' experiences of online peer feedback. Computers and Composition, 24, 443–461. doi:10.1016/j.compcom.2007.03.002
Gülbahar, Y. (2009). E-Öğrenme. Ankara: PEGEM.
Gunawardena, C. N., Lowe, C. A., & Anderson, T. (1997). Analysis of a global online debate and the development of an interaction model for examining the social construction of knowledge in computer conferencing. Journal of Educational Computing Research, 17(4), 397–431. doi:10.2190/7MQV-X9UJ-C7Q3-NRAG
Gunawardena, C. N., & Zittle, F. J. (1997). Social presence as a predictor of satisfaction within a computer mediated conferencing environment. American Journal of Distance Education, 11(3), 8–26. doi:10.1080/08923649709526970
Haddawy, P., & Hien, N. (2006). A decision support system for evaluating international student applications. Computer Science and Information Management program, Asian Institute of Technology.
Hafner, W., & Ellis, T. J. (2004). Project-based, asynchronous collaborative learning. Proceedings of the 37th Hawaii International Conference on System Sciences. Piscataway, NJ: IEEE.
Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis (7th ed.). Upper Saddle River, NJ: Prentice Hall.
Hair, J., Black, W., Babin, B., Anderson, R., & Tatham, R. (2006). Multivariate data analysis. Upper Saddle River, NJ: Pearson Prentice Hall.
Hair, J. F. J., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate data analysis. New Jersey: Prentice Hall.
Hall, D. J., Cegielski, C. G., & Wade, J. N. (2006). Theoretical value belief, cognitive ability, and personality as predictors of student performance in object-oriented programming environments. Decision Sciences Journal of Innovative Education, 4(2), 237–257. doi:10.1111/j.1540-4609.2006.00115.x
Hall, M. (2008). Predicting student performance in Web-based distance education courses based on survey instruments measuring personality traits and technical skills. Online Journal of Distance Learning Administration, 11(4). Retrieved from http://www.westga.edu/~distance/ojdla/fall113/hall113.html
Hall, A. (2007). Vygotsky goes online: Learning design from a socio-cultural perspective. Learning and Socio-cultural Theory: Exploring Modern Vygotskian Perspectives International Workshop 2007. Retrieved June 6, 2009, from http://ro.uow.edu.au/llrg/vol1/iss1/6
Hamid, A. A. (2002). E-learning: Is it the 'e' or the 'learning' that matters? The Internet and Higher Education, 4, 311–316. doi:10.1016/S1096-7516(01)00072-0
Hammer, M., & Champy, J. (2001). Reengineering the corporation: A manifesto for business revolution. London, UK: Nicholas Brealey Publishing.
Hansen, D. E. (2008). Knowledge transfer in online learning environments. Journal of Marketing Education, 30, 93–105. doi:10.1177/0273475308317702
Hansmann, K. W., & Ringle, C. M. (2004). Smart PLS manual. Förderverein Industrielles Management an der Universität Hamburg e.V.
Hardaway, D., & Will, R. P. (1997). Digital multimedia offers key to educational reform. Communications of the ACM, 40(4), 90–96. doi:10.1145/248448.248463 Hardgrave, B. C., Wilson, R. L., & Eastman, K. (1999). Toward a contingency model for selecting an Information System prototyping strategy. Journal of Management Information Systems, 16(2), 113–136. Harvey, L. (1998). An assessment of past and current approaches to quality in higher education. Australian Journal of Education, 42(3), 237–238. Harvey, L. (2002). Evaluation for what? Teaching in Higher Education, 7(3), 245–263. doi:10.1080/13562510220144761 Harvey, D., Moller, L. A., Huett, J. B., Godshalk, V. M., & Downs, M. (2007). Identifying factors that affect learning community development and performance in asynchronous distance education. In Luppicini, R. (Ed.), Online learning communities (pp. 169–187). Charlotte, NC: Information Age Publishing. Hatfield, S. R., & Gorman, K. L. (2000). Assessment in education--the past, present, and future. Assessment in Business Education - 2000 NBEA Yearbook, 38, 1-10. Hativa, N., & Marincovich, M. (Eds.). (1995). New directions for teaching and learning - disciplinary differences in teaching and learning: Implications for practice. San Francisco, CA: Jossey-Bass. Hayduk, L., Cummings, G. G., Boadu, K., PazderkaRobinson, H., & Boulianne, S. (2007). Testing! Testing! One, two three – testing the theory in structural equation models! Personality and Individual Differences, 42(2), 841–850. doi:10.1016/j.paid.2006.10.001 Hayduk, L. A. (1987). Structural equation modeling with LISREL. Baltimore, MD: Johns Hopkins University Press. Hayduk, L. A. (1996). LISREL issues, debates and strategies. Baltimore, MD: Johns Hopkins University Press. Heckman, R., & Annabi, H. (2006). How the teacher’s role changes in online case study discussions. Journal of Information Systems Education, 17, 141–150.
Hendrickson, A., Massey, P. D., & Cronan, T. P. (1993). On the test-retest reliability of perceived usefulness and perceived ease of use scales. Management Information Systems Quarterly, 17(2), 227–230. doi:10.2307/249803
Hennessy, S., Deaney, R., Ruthven, K., & Winterbottom, M. (2007). Pedagogical strategies for using the interactive whiteboard to foster learner participation in school science. Learning, Media and Technology, 32(3), 283–301. doi:10.1080/17439880701511131
Henri, F. (1992). Computer conferencing and content analysis. In A. Kaye (Ed.), Collaborative learning through computer conferencing: The Najaden papers (pp. 117-136). Berlin, Germany: Springer-Verlag.
Herrera Corona, L., Mendoza Zaragoza, N. E., & Buenabad Arias, M. A. (2009). Educación a distancia: Una perspectiva emocional e interpersonal. Apertura, 9(10), 62–77.
Herting, J. R. (1985). Multiple indicator models using LISREL. In Blalock, H. M. (Ed.), Causal models in the social sciences (pp. 263–319). New York, NY: Aldine.
Hickcox, L. K. (1995). Learning styles: A survey of adult learning style inventory models. In Sims, R., & Sims, S. (Eds.), The importance of learning styles: Understanding the implications for learning, course design, and education (pp. 25–48). Westport, CT: Greenwood Press.
Hill, A. M. (1997). Reconstructionism in technology education. International Journal of Technology and Design Education, 7(1–2), 121–139. doi:10.1023/A:1008856902644
Hillman, D. C. A., Willis, D. J., & Gunawardena, C. N. (1994). Learner-interface interaction in distance education: An extension of contemporary models and strategies for practitioners. American Journal of Distance Education, 8(2), 30–42. doi:10.1080/08923649409526853
Hiltz, S. R. (1995). Teaching in a virtual classroom. International Journal of Educational Telecommunications, 1(2), 185–198.
Hirschheim, R. (1985). Information systems epistemology: An historical perspective. In Mumford, E., Hirschheim, R., & Fitzgerald, R. (Eds.), Research methods in Information Systems (pp. 13–18). Amsterdam, The Netherlands: North-Holland.
Hitt, M. A., Beamish, P. W., Jackson, S. E., & Mathieu, J. E. (2007). Building theoretical and empirical bridges across levels: Multilevel research in management. Academy of Management Journal, 50, 1385–1399.
Hodges, C. B. (2004). Designing to motivate: Motivational techniques to incorporate in e-learning experiences. The Journal of Interactive Online Learning, 2(3), 311–316.
Hoffman, D. W. (2002). Internet-based distance learning in higher education. Tech Directions, 62(1), 28–32.
Holley, D., & Oliver, M. (2010). Student engagement and blended learning: Portraits of risk. Computers & Education, 54(3), 693–700. doi:10.1016/j.compedu.2009.08.035
Holsapple, C. W., & Lee-Post, A. (2006). Defining, assessing, and promoting e-learning success: An Information Systems perspective. Decision Sciences Journal of Innovative Education, 4(1), 67–85. doi:10.1111/j.1540-4609.2006.00102.x
Honey, P., & Mumford, A. (1992). The manual of learning styles. Maidenhead, UK: Peter Honey.
Hong, K.-S. (2002). Relationships between students' and instructional variables with satisfaction and learning from a Web-based course. The Internet and Higher Education, 5(3), 267–281. doi:10.1016/S1096-7516(02)00105-7
Hooper, D., Coughlan, J., & Mullen, M. R. (2008). Structural equation modeling: Guidelines for determining model fit. The Electronic Journal of Business Research Methods, 6(1), 53–60.
Howell, J. M., & Higgins, C. A. (1990). Champions of technological innovations. Administrative Science Quarterly, 35(2), 317–341. doi:10.2307/2393393
Howell, S., & Baker, K. (2006). Good (best) practices for electronically offered degree and certificate programs: A 10-year retrospect. Distance Learning, 3(1), 41–47.
Hoyle, R. H. (1995). The structural equation modeling approach: Basic concepts and fundamental issues. In Hoyle, R. H. (Ed.), Structural equation modeling: Concepts, issues, and applications (pp. 1–15). Thousand Oaks, CA: Sage Publications.
Hoyle, R. H., & Panter, A. T. (1995). Writing about structural equation models. In Hoyle, R. H. (Ed.), Structural equation modeling: Concepts, issues, and applications (pp. 158–176). Thousand Oaks, CA: Sage Publications.
Hsiu-Yuan, W., & Shwu-Huey, W. (2010). User acceptance of mobile internet based on the unified theory of acceptance and use of technology: Investigating the determinants and gender differences. Social Behavior & Personality: An International Journal, 38(3), 415–426. doi:10.2224/sbp.2010.38.3.415
Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55. doi:10.1080/10705519909540118
Hulland, J. S. (1995). Market orientation and market learning systems: An environment-strategy-performance perspective. (Working Paper Series No. 95-09), The University of Western Ontario.
Hulland, J. S. (1999). Use of partial least squares (PLS) in strategic management research: A review of four recent studies. Strategic Management Journal, 20(2), 195–204. doi:10.1002/(SICI)1097-0266(199902)20:2<195::AID-SMJ13>3.0.CO;2-7
Hulland, J. S., Cho, Y. H., & Lam, S. (1996). Use of causal models in marketing research: A review. International Journal of Research in Marketing, 13(2), 181–197. doi:10.1016/0167-8116(96)00002-X
Hulland, J. S., & Kleinmuntz, D. N. (1994). Factors influencing the use of internal summary evaluations versus external information in choice. Journal of Behavioral Decision Making, 7(2), 79–102. doi:10.1002/bdm.3960070202
Hwang, A., & Arbaugh, J. B. (2006). Virtual and traditional feedback-seeking behaviors: Underlying competitive attitudes and consequent grade performance. Decision Sciences Journal of Innovative Education, 4, 1–28. doi:10.1111/j.1540-4609.2006.00099.x
Hwang, A., & Arbaugh, J. B. (2009). Seeking feedback in blended learning: Competitive versus cooperative student attitudes and their links to learning outcome. Journal of Computer Assisted Learning, 25, 280–293. doi:10.1111/j.1365-2729.2009.00311.x
Hwang, A., & Francesco, A. M. (2010). The influence of individualism-collectivism and power distance on use of feedback channels and consequences for learning. Academy of Management Learning & Education, 9, 243–257.
IDE. (1998). An emerging set of guiding principles and practices for the design and development of distance education. Innovations in Distance Education, Penn State University.
Je Ho, C., & Park, M.-C. (2005). Mobile internet acceptance in Korea. Internet Research, 15(2), 125–140. doi:10.1108/10662240510590324
IHEP. (2000). Quality on the line: Benchmarks for success in Internet-based distance education. Washington, DC: Institute for Higher Education Policy.
John, O. P., Donahue, E. M., & Kentle, R. L. (1991). The ''Big-Five'' inventory – versions 4a and 54 (Technical Report). Berkeley, CA: University of California, Institute of Personality Assessment and Research.
INEGI. (2009). Usuarios de tecnologías de información, 2001 a 2009. Retrieved July 15, 2010, from http://www.inegi.org.mx/est/contenidos/espanol/soc/sis/sisept/default.aspx?t=tinf204&s=est&c=5577
Johnson, D. W., Johnson, R. T., & Holubec, E. J. (1994). Cooperative learning in the classroom. Alexandria, VA: Association for Supervision and Curriculum Development.
Inglis, A. (2008). Approaches to the validation of quality frameworks for e-learning. Quality Assurance in Education, 16(4), 347–362. doi:10.1108/09684880810906490
Johnson, D. W., Johnson, R. T., & Stanne, M. B. (2000). Cooperative learning methods: A meta analysis. The Cooperative Learning Center at the University of Minnesota.
Institute for Education Sciences. (2008). What works clearinghouse: Procedures and standards handbook (version 2.0). Retrieved from http://ies.ed.gov/ncee/wwc/references/idocviewer/Doc.aspx?docId=19&tocId=11
Johnson, M. D., & Fornell, C. (1987). The nature and methodological implications of the cognitive representation of products. The Journal of Consumer Research, 14(September), 214–228. doi:10.1086/209107
IÖLP. (2010). Anadolu University, Open Education Faculty, Program of English Language Teaching. Retrieved January 10, 2010, from http://iolp.anadolu.edu.tr/genel.htm
Johnson, R. D., Hornik, S., & Salas, E. (2008). An empirical examination of factors contributing to the creation of successful e-learning environments. International Journal of Human-Computer Studies, 66, 356–369. doi:10.1016/j.ijhcs.2007.11.003
Isler, V. (1997). Sanal Universite. Paper presented at Inet-tr'97: Turkiye Internet Konferansi, Ankara, Turkey.
ISO. (2009a). ISO 9000 and ISO 14000. Retrieved December 17, 2009, from http://www.iso.org/iso/iso_catalogue/management_standards/iso_9000_iso_14000.htm
ISO. (2009b). ISO 9000 essentials. Retrieved October 30, 2009, from http://www.iso.org/iso/iso_catalogue/management_standards/iso_9000_iso_14000/iso_9000_essentials.htm
Isodynamic. (2001). E-learning. Retrieved October 13, 2009, from http://www.isodynamic.com/web/pdf/IsoDynamic_elearning_white_paper.pdf
Janson, M. A., & Smith, L. D. (1985). Prototyping for systems development: A critical appraisal. Management Information Systems Quarterly, (December): 305–316. doi:10.2307/249231
Jasinski, M. (1998). Teaching and learning styles that facilitate on line learning: Documentation project. Project Report, Douglas Mawson Institute of TAFE.
Jonassen, D. H. (1993). Thinking technology: Context is everything. Educational Technology, 31(6), 35–37. Jonassen, D., Davidson, M., Collins, M., Campbell, J., & Haag, B. B. (1995). Constructivism and computer-mediated communication in distance education. American Journal of Distance Education, 9(2), 7–26. doi:10.1080/08923649509526885 Jonassen, D. H., Tessmer, M., & Hannum, W. H. (1999). Task analysis methods for instructional design. Mahwah, NJ: Erlbaum. Jones, R., Moeeni, F., & Ruby, P. (2005). Comparing Web-based content delivery and instructor-led learning in a telecommunications course. Journal of Information Systems Education, 16, 265–271. Jones, S., & Rainie, L. (2002). The Internet goes to college. Washington, D.C.: Pew Internet and American Life Project.
Jöreskog, K. G., & Sörbom, D. (1982). Recent developments in structural equation modeling. JMR, Journal of Marketing Research, 19(4), 404–416. doi:10.2307/3151714
Kathawala, Y., Abdou, K., & Elmuti, D. S. (2002). The global MBA: A comparative assessment for its future. Journal of European Industrial Training, 26(1), 14–23. doi:10.1108/03090590210415867
Jöreskog, K. G., & Sörbom, D. (1989). LISREL 7 user’s reference guide. Chicago, IL: SPSS Publications.
Ke, F., & Hoadley, C. (2009). Evaluating online learning communities. Educational Technology Research and Development, 57(4), 487–510. doi:10.1007/s11423-009-9120-2
Jöreskog, K. G., & Sörbom, D. (1993). LISREL 8: Structural equation modeling with the SIMPLIS command language. Hillsdale, NJ: Lawrence Erlbaum Associates Publishers. Jöreskog, K. G., & Wold, H. (1982). The ML and PLS techniques for modeling with latent variables: Historical and comparative aspects. In Jöreskog, K. G., & Wold, H. (Eds.), Systems under indirect observation: Causality, structure, prediction (Vol. 1, pp. 263–270). Amsterdam, The Netherlands: North Holland. Julian, S. D., & Ofori-Dankwa, J. C. (2006). Is accreditation good for the strategic decision making of traditional business schools? Academy of Management Learning & Education, 5, 225–233. Jung, I., Choi, S., Lim, C., & Leem, J. (2002). Effects of different types of interaction on learning achievement, satisfaction and participation in Web-based instruction. Innovations in Education and Teaching International, 39(2), 153–162. doi:10.1080/14703290252934603 Kanninen, E. (2009). Learning styles and e-learning. Master of Science Thesis, Master’s Degree Programme in Electrical Engineering. Kanuka, H., Rourke, L., & Laflamme, E. (2007). The influence of instructional methods on the quality of online discussion. British Journal of Educational Technology, 38(2), 260–271. doi:10.1111/j.1467-8535.2006.00620.x Kanuka, H., & Anderson, T. (1998). Online social interchange, discord, and knowledge construction. Journal of Distance Education, 13(1), 57–74. Karaman, S., & Celik, S. (2008). An exploratory study on the perspectives of prospective computer teachers following project-based learning. International Journal of Technology and Design Education, 18(2), 203–215.
Keefe, J. W. (1986). Learning style profile. National Association of Secondary School Principals.
Keefe, T. J. (2003). Using technology to enhance a course: The importance of interaction. EDUCAUSE Quarterly, 1, 24–34.
Keil, C., Haney, J., & Zoffel, J. (2009). Improvements in student achievement and science process skills using environmental health science problem-based learning curricula. Electronic Journal of Science Education, 13(1), 3–20.
Kellogg, D. L., & Smith, M. A. (2009). Student-to-student interaction revisited: A case study of working adult business students in online courses. Decision Sciences Journal of Innovative Education, 7, 433–456. doi:10.1111/j.1540-4609.2009.00224.x
Kelly, H. F., Ponton, M. K., & Rovai, A. P. (2007). A comparison of student evaluations of teaching between online and face-to-face courses. The Internet and Higher Education, 10, 89–101. doi:10.1016/j.iheduc.2007.02.001
Kenny, D. A. (1979). Correlation and causality. New York, NY: Wiley.
Kenworthy, J., & Wong, A. (2005). Developing managerial effectiveness: Assessing and comparing the impact of development programmes using a management simulation or a management game. Developments in Business Simulations and Experiential Learning, 32.
Kettinger, W. J., & Lee, C. C. (1994). Perceived service quality and user satisfaction with the Information Service function. Decision Sciences, 25(5/6), 737–765. doi:10.1111/j.1540-5915.1994.tb01868.x
Khan, B. (1997). Web-based instruction. Englewood Cliffs, NJ: Educational Technology Publications.
Khan, B. H. (2000). Discussion of resources and attributes of the Web for the creation of meaningful learning environments. CyberPsychology & Behavior, 3(1), 17–23. doi:10.1089/109493100316193
Kolb, D. A. (1984). Experiential learning: Experiences as the source of learning and development. Englewood Cliffs, NJ: Prentice Hall.
Khurana, R. (2007). From higher aims to hired hands: The social transformation of American business schools and the unfulfilled promise of management as a profession. Princeton, NJ: Princeton University Press.
Konradt, U., Christophersen, T., & Schaeffer-Kuelzb, U. (2006). Predicting user satisfaction, strain and system usage of employee self-services. International Journal of Human-Computer Studies, 64(11), 1141–1153. doi:10.1016/j.ijhcs.2006.07.001
Kieffer, K. M., Reese, R. J., & Thompson, B. (2001). Statistical techniques employed in AERJ and JCP articles from 1988 to 1997: A methodological review. Journal of Experimental Education, 69, 280–309. doi:10.1080/00220970109599489
Koohang, A., Britz, J., & Seymour, T. (2006). Hybrid/blended learning: Advantages, challenges, design, and future directions. Retrieved July 20, 2009, from http://proceedings.informingscience.org/InSITE2006/ProcKooh121.pdf
Kim, E. B., & Schniederjans, M. J. (2004). The role of personality in Web-based distance education courses. Communications of the ACM, 47(3), 95–98. doi:10.1145/971617.971622
Krajcik, J., Czerniak, C., & Berger, C. (1999). Teaching science: A project-based approach. New York, NY: McGraw-Hill College.
Kim, K.-J., Liu, S., & Bonk, C. J. (2005). Online MBA students' perceptions of online learning: Benefits, challenges and suggestions. The Internet and Higher Education, 8, 335–344. doi:10.1016/j.iheduc.2005.09.005
Kim, N., Smith, M. J., & Maeng, K. (2008). Assessment in online distance education: A comparison of three online programs at a university. Online Journal of Distance Learning Administration, 11(1). Retrieved from http://www.westga.edu/~distance/ojdla/spring111/kim111.html
Kirk, R. E. (1994). Experimental design: Procedures for behavioral sciences (3rd ed.). Belmont, CA: Wadsworth.
Kirkpatrick, G. (2010). Online chat facilities as pedagogic tools: A case study. Active Learning in Higher Education, 6(2), 145–159. doi:10.1177/1469787405054239
Kiser, K. (1999). 10 things we know so far about online training. Training (New York, N.Y.), 36(11), 66–74.
Klein, H. J., Noe, R. A., & Wang, C. (2006). Motivation to learn and course outcomes: The impact of delivery mode, learning goal orientation, and perceived barriers and enablers. Personnel Psychology, 59, 665–702. doi:10.1111/j.1744-6570.2006.00050.x
Kline, R. B. (2005). Principles and practice of structural equation modeling (2nd ed.). New York, NY: The Guilford Press.
Koohang, A., & Durante, A. (2003). Learners' perceptions toward the Web-based distance learning activities/assignments portion of an undergraduate hybrid instructional model. Journal of Information Technology Education, 2, 105–113.
Krajcik, J. S., Blumenfeld, P. C., Marx, R. W., & Soloway, E. (1994). A collaborative model for helping middle-grade science teachers learn project-based instruction. The Elementary School Journal, 94(5), 483–497. doi:10.1086/461779
Krug, S. E., & Johns, E. F. (1986). A large scale cross-validation of second-order personality structure defined by the 16PF. Psychological Reports, 59, 683–693.
Kuhn, T. S. (1970). The structure of scientific revolutions (2nd ed.). Chicago, IL: University of Chicago Press.
L'Allier, J. J. (1997). Frame of reference: NETg's map to the products, their structure and core beliefs. NETg white paper. Retrieved from www.netg.com
Lalley, J. P., & Gentile, J. R. (2009). Adapting instruction to individuals: Based on the evidence, what should it mean? International Journal of Teaching and Learning in Higher Education, 20(3), 462–475.
Land, S. M., & Greene, B. A. (2000). Project-based learning with the World Wide Web: A qualitative study of resource integration. ETR&D, 48(1), 45–68. doi:10.1007/BF02313485
Landry, B. J. L., Griffeth, R., & Hartman, S. (2006). Measuring student perceptions of blackboard using the technology acceptance model. Decision Sciences Journal of Innovative Education, 4, 87–99. doi:10.1111/j.1540-4609.2006.00103.x
Larson, P. D. (2002). Interactivity in an electronically delivered marketing course. Journal of Education for Business, 77, 265–245. doi:10.1080/08832320209599675
Larson, R. C., & Murray, M. (2008). Distance learning as a tool for poverty reduction and economic development: A focus on China and Mexico. Journal of Science Education and Technology, 17(2), 175–196. doi:10.1007/s10956-007-9059-1
Lattuca, L. R., & Stark, J. S. (1994). Will disciplinary perspectives impede curricular reform? The Journal of Higher Education, 65, 401–426. doi:10.2307/2943853
Leef, G. C. (2003). Accreditation is no guarantee of academic quality. The Chronicle of Higher Education, 49(30), B17.
Leef, G. C., & Burris, R. D. (2003). Can college accreditation live up to its promises? Washington, DC: American Council of Trustees and Alumni.
Lee-Post, A. (2007). Success factors in developing and delivering online courses in operations management. International Journal of Information and Operations Management Education, 2(2), 131–139. doi:10.1504/IJIOME.2007.015279
Lee-Post, A. (2009). E-learning success model: An Information Systems perspective. Electronic Journal of e-Learning, 7(1), 61-70.
Lau, R. (2000). Issues and outlook of e-learning. Business Review (Federal Reserve Bank of Philadelphia), 31, 1–6.
Leidner, D. E., & Jarvenpaa, S. L. (1995). The use of information technology to enhance management school education: A theoretical view. Management Information Systems Quarterly, 19, 265–291. doi:10.2307/249596
Lawrence, G. (1993). People types and tiger stripes: A practical guide to learning styles (3rd ed.). Gainesville, FL: Center for Applications of Psychological Type.
Levy, Y. (2007). Comparing dropouts and persistence in e-learning courses. Computers & Education, 48, 185–204. doi:10.1016/j.compedu.2004.12.004
Lee, A. S., & Hubona, G. S. (2009). A scientific basis for rigor in Information Systems research. Management Information Systems Quarterly, 33(2), 237–262.
Lewin, K. (1947). Frontiers in group dynamics II. Human Relations, 1, 143–153. doi:10.1177/001872674700100201
Lee, C. I., & Tsai, F. Y. (2004). Internet project-based learning environment: The effects of thinking styles on learning transfer. Journal of Computer Assisted Learning, 20(1), 31–39. doi:10.1111/j.1365-2729.2004.00063.x Lee, D. Y. (2007). The impact of poor performance on risk-taking attitudes: A longitudinal study with a PLS causal modeling approach. Decision Sciences, 28(1), 59–80. doi:10.1111/j.1540-5915.1997.tb01302.x Lee, D., & Kang, S. (2005). Perceived usefulness and outcomes of intranet-bases learning (IBL): Developing asynchronous knowledge systems in organizational settings. Journal of Instructional Psychology, 32(1), 68–73. Lee, H.-J., & Rha, I. (2009). Influence of structure and interaction on student achievement and satisfaction in Web-based distance learning. Journal of Educational Technology & Society, 12(4), 372–382.
Lewis, P. A., & Price, S. (2007). Distance education and the integration of e-learning in a graduate program. Journal of Continuing Education in Nursing, 38(3), 139–143. Li, Q. R., Lau, R. W. H., Shih, T. K., & Li, F. W. B. (2008). Technology supports for distributed and collaborative learning over the Internet. ACM Transactions on Internet Technology, 24. Liaw, S., Huang, H., & Chen, G. (2007). Surveying instructor and learner attitudes toward e-learning. Computers & Education, 49, 1066–1080. doi:10.1016/j. compedu.2006.01.001 Lievens, F., Coetsier, P., De Fruyt, F., & De Maeseneer, J. (2002). Medical students’ personality characteristics and academic performance: A five-factor model perspective. Medical Education, 36, 1050–1056. doi:10.1046/j.13652923.2002.01328.x
Lim, C. P., Lee, S. L., & Richards, C. (2006). Developing interactive learning objects for a computing mathematics module. International Journal on E-Learning, 5(2), 221–244.
Lohdahl, J. B., & Gordon, G. (1972). The structure of scientific fields and the functioning of university graduate departments. American Sociological Review, 37, 57–72. doi:10.2307/2093493
Lin, S. Y., & Overbaugh, R. C. (2007). The effect of student choice of online discussion format on tiered achievement and student satisfaction. Journal of Research on Technology in Education, 39(4), 399–415.
Lohmoller, J. B. (1989). Latent variable path modeling with partial least squares. Heidelberg, Germany: Physica-Verlag.
Lohmoller, J. B. (1982). An overview of latent variables path analysis. Paper presented at the Annual Meeting of the American Educational Research Association, New York.
Little, B. B. (2009). Quality assurance for online nursing courses. The Journal of Nursing Education, 48(7), 381–387. doi:10.3928/01484834-20090615-05
Liu, S. Y., Gomez, J., & Cherng-Jyh, Y. (2009). Community college online course retention and final grade: Predictability of social presence. Journal of Interactive Online Learning, 8(2), 165–182.
Liu, X., Bonk, C. J., Magjuka, R. J., Lee, S., & Su, B. (2005). Exploring four dimensions of online instructor roles: A program level case study. Journal of Asynchronous Learning Networks, 9(4), 29–48.
Liu, X., Magjuka, R. J., Bonk, C. J., & Lee, S.-H. (2007). Does sense of community matter? An examination of participants' perceptions of building learning communities in online courses. Quarterly Review of Distance Education, 8(1), 9–24.
Liu, X., Magjuka, R. J., & Lee, S. (2006). An empirical examination of sense of community and its effects on students' satisfaction, perceived learning outcome, and learning engagement in online MBA courses. International Journal of Instructional Technology & Distance Learning, 3(7). Retrieved September 15, 2006, from http://www.itdl.org/Journal/Jul_06/article01.htm
Livari, J. (2005). An empirical test of the DeLone-McLean model of Information System success. The Data Base for Advances in Information Systems, 36(2), 8–27.
Lockee, B., Moore, M., & Burton, J. (2002). Measuring success: Evaluation strategies for distance education. EDUCAUSE Quarterly, 25(1), 20–26. Retrieved from http://www.educause.edu/ir/library/pdf/eqm0213.pdf
Loehlin, J. C. (2004). Latent variables: An introduction to factor, path, and structural equation modeling. Mahwah, NJ: Lawrence Erlbaum Associates.
Lou, Y., & MacGregor, S. K. (2004). Enhancing projectbased learning through online between-group collaboration. Educational Research and Evaluation, 10(4-6), 419–440. doi:10.1080/13803610512331383509 Lounsbury, J. W., & Gibson, L. W. (1998). Personal style inventory: A work-based personality measurement system. Knoxville, TN: Resource Associates. Lounsbury, J. W., Sundstrom, E., Loveland, J. M., & Gibson, L. W. (2003). Intelligence, ‘‘Big-Five’’ personality traits, and work drive as predictors of course grade. Personality and Individual Differences, 35, 1231–1239. doi:10.1016/S0191-8869(02)00330-6 LTM (Learning Type Measurement). (2004) Discover your learning styles graphically. Retrieved on December 29, 2009, from www.learningstyles-online.com Lu, J., Yu, C.-S., & Liu, C. (2003). Learning style, learning patterns, and learning performance in a WebCT-based MIS course. Information & Management, 40, 497–507. doi:10.1016/S0378-7206(02)00064-2 Lu, J., Yu, C.-S., Liu, C., & Yao, J. E. (2003). Technology acceptance model for wireless Internet. Internet Research, 13(3), 206–222. doi:10.1108/10662240310478222 Luan, J. (2002). Data mining and its applications in higher education. In Serban, A., & Luan, J. (Eds.), Knowledge management: Building a competitive advantage for higher education. New directions for Institutional Research, 113. San Francisco, CA: Jossey Bass. Luan, J., Zhai, M., Chen, J., Chow, T., Chang, L., & Zhao, C.-M. (2004). Concepts, myths, and case studies of data mining in higher education. AIR 44th Forum Boston.
Lui, Y., & Wang, H. (2009). A comparative study on e-learning technologies and products: From the East to the West. Systems Research and Behavioral Science, 26, 191–209. doi:10.1002/sres.959 Ma, Y., Liu, B., Wong, C. K., Yu, P. S., & Lee, S. M. (2000). Targeting the right students using data mining. Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and data mining, (pp 457-464). Boston, MA. MacCallum, R. C., & Browne, M. W. (1993). The use of causal indicators in covariance structure models: Some practical issues. Psychological Bulletin, 114(3), 533–541. doi:10.1037/0033-2909.114.3.533 MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1, 130–149. doi:10.1037/1082-989X.1.2.130 Macpherson, A., Eliot, M., Harris, I., & Homan, G. (2004). E-learning: Reflections and evaluation of corporate programmes. Human Resource Development International, 295–313. doi:10.1080/13678860310001630638 MacVicar, A., & Vaughan, K. (2004). Employees’ pre-implementation attitudes and perceptions to e-learning: A banking case study analysis. Journal of European Industrial Training, 28(5), 400–413. doi:10.1108/03090590410533080 Madjar, N., Oldham, G. R., & Pratt, M. G. (2002). There’s no place like home? The contributions of work and nonwork creativity support to employees’ creative performance. Academy of Management Journal, (August): 757–767. doi:10.2307/3069309 Major, D. A., Turner, J. E., & Fletcher, T. D. (2006). Linking proactive personality and the Big-Five to motivation to learn and development activity. The Journal of Applied Psychology, 91(4), 927–935. doi:10.1037/00219010.91.4.927 Maki, R. H., & Maki, W. S. (2003). Prediction of learning and satisfaction in Web-based and lecture courses. Journal of Educational Computing Research, 28(3), 197–219. doi:10.2190/DXJU-7HGJ-1RVP-Q5F2
Maki, R. H., Maki, W. S., Patterson, M., & Whittaker, P. D. (2000). Evaluation of a Web-based introductory psychology course: Learning and satisfaction in online versus lecture courses. Behavior Research Methods, Instruments, & Computers, 32(2), 230–239. doi:10.3758/BF03207788 Marks, R. B., Sibley, S., & Arbaugh, J. B. (2005). A structural equation model of predictors for effective online learning. Journal of Management Education, 29, 531–563. doi:10.1177/1052562904271199 Marold, K. A., Larsen, G., & Moreno, A. (2000). Web-based learning: Is it working? A comparison of student performance and achievement in Web-based courses and their in-classroom counterparts. Proceedings of the 2000 Information Resources Management Association International Conference on challenges of Information Technology management in the 21st century, (pp. 350–353). Anchorage, Alaska, United States. Marshall, C., & Rossman, G. B. (1999). Designing qualitative research (3rd ed.). Thousand Oaks, CA: Sage Publications. Marshall, S., & Mitchell, G. (2007). Benchmarking for quality improvement: The e-learning maturity model. Paper presented at ascilite Singapore 2007. Martins, L. L., & Kellermanns, F. W. (2004). A model of business school students' acceptance of a Web-based course management system. Academy of Management Learning & Education, 3, 7–26. Martyn, M. (2003). The hybrid online model: Good practice. EDUCAUSE Quarterly, 26(1), 18–23. Martz, B., Reddy, V. K., & Sangermano, K. (2004). Looking for indicators of success for distance education. In Howard, C., Schenk, K., & Discenza, R. (Eds.), Distance learning and university effectiveness: Changing educational paradigms for online learning (pp. 144–160). Hershey, PA: Information Science Publishing. doi:10.4018/9781591401780.ch007 Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: Association for Supervision and Curriculum Development.
McBrien, J. L., Jones, P., & Cheng, R. (2009). Virtual spaces: Employing a synchronous online classroom to facilitate student engagement in online learning. International Review of Research in Open and Distance Learning, 10(3), 1–17. McCrae, R. R., & Costa, P. T. Jr. (1985). Updating Norman's adequate taxonomy: Intelligence and personality dimensions in natural language and in questionnaires. Journal of Personality and Social Psychology, 49, 710–721. doi:10.1037/0022-3514.49.3.710 McCrae, R. R., & Costa, P. T. Jr. (1997). Personality trait structure as a human universal. The American Psychologist, 52, 509–516. doi:10.1037/0003-066X.52.5.509 McDonald, M., Dorn, B., & McDonald, G. (2004). A statistical analysis of student performance in online computer science courses. Proceedings of the 35th SIGCSE Technical Symposium on Computer Science Education, Norfolk, Virginia, (pp. 71-74). McDonald, R. P., & Ho, M.-H. R. (2002). Principles and practice in reporting structural equation analyses. Psychological Methods, 7(1), 64–82. doi:10.1037/1082-989X.7.1.64 McFarland, D., & Hamilton, D. (2006). Factors affecting student performance and satisfaction: Online versus traditional course delivery. Journal of Computer Information Systems, 46(2), 25–32. McKlin, T., Harmon, S. W., Evans, W., & Jones, M. G. (2002). Cognitive presence in Web-based learning: A content analysis of students' online discussions. American Journal of Distance Education, 15(1), 7–23. Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, DC: U.S. Department of Education. Meier, C. (2007). Enhancing intercultural understanding using e-learning strategies. South African Journal of Education, 27, 655–671. Melis, E., & Monthienvichienchai, R. (2004). They call it learning style but it's so much more. World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (eLearn2004).
Merrill, M. D. (2001). Components of instruction toward a theoretical tool of instructional design. Instructional Science, 29, 291–310. doi:10.1023/A:1011943808888 Meyer, K. (2003). Face-to-face versus threaded discussions: The role of time and higher-order thinking. Journal of Asynchronous Learning Networks, 7(3), 55–65. Meyer, K. A. (2002). Quality in distance education: Focus on online learning. Hoboken, NJ: Wiley Periodicals, Inc. Mills, J. E., & Treagust, D. F. (2003). Engineering education – is problem-based or project-based learning the answer? Australian Journal of Engineering Education, 4. Retrieved June 6, 2009, from http://www.aaee.com.au/ journal/ 2003/mills_treagust03.pdf Millson, M. R., & Wilemon, D. (2008). Educational quality correlates of online graduate management education. Journal of Distance Education, 22(3), 1–18. Mingming, J., & Evelyn, T. (1999). A study of students’ perceived learning in a Web-based online environment. In Proceedings of WebNet 99 World Conference on WWW and Internet, Honolulu, Hawaii. Mioduser, D., Nachmias, R., Oren, A., & Lahav, O. (2000). Web-based learning environments: Current pedagogical and technological state. Journal of Research in Computing in Education, 33(1), 55–76. Moallem, M. (2009a). Assessment of complex learning outcomes in online learning environments. In Rogers, P., Berg, G. A., Boettecher, J. V., Howard, C., & Justice, L. (Eds.), Encyclopedia of distance learning (2nd ed.). Hershey, PA: IGI Global. Moallem, M. (2009b). The efficacy of current assessment tools and techniques for assessment of complex and performance-based learning outcomes in online learning. In Rogers, P., Berg, G. A., Boettecher, J. V., Howard, C., & Justice, L. (Eds.), Encyclopedia of distance learning (2nd ed.). Hershey, PA: IGI Global. Monolescu, D., & Schifter, C. (2000). Online focus group: A tool to evaluate online students’ course experience. The Internet and Higher Education, 2, 171–176. doi:10.1016/ S1096-7516(00)00018-X Montano, C. B., & Utter, G. H. (1999). Total quality management in higher education. Quality Progress, 32(8), 52–59.
Moodle. (2009). Information. Retrieved from http:// www.moodle.org Mook, D. G. (1983). In defense of external invalidity. The American Psychologist, 38(4), 379–387. doi:10.1037/0003-066X.38.4.379 Moore, J. C. (2002). Elements of quality: The Sloan-C framework. Needham, MA: The Sloan Consortium. Moore, J. C. (2005). The Sloan Consortium quality framework and the five pillars. Needham, MA: The Sloan Consortium. Moore, G. C., & Benbasat, I. (1996). Integrating diffusion of innovations and theory of reasoned action models to predict utilization of information technology by endusers. In Kautz, K., & Pries-Hege, J. (Eds.), Diffusion and adoption of information technology (pp. 132–146). London, UK: Chapman and Hall.
Muirhead, B. (2004). Encouraging interactivity in online classes. International Journal of Instructional Technology and Distance Learning, 2(11). Retrieved from http://itdl. org/ Journal/ Jun_04/ article07.htm. Mupinga, D. M., Nora, R. T., & Yaw, D. C. (2006). The learning styles, expectations, and needs of online students. College Teaching, 54(1), 185–189. doi:10.3200/ CTCH.54.1.185-189 Murphy, S. M., & Tyler, S. (2005). The relationship between learning approaches to part-time study of management courses and transfer of learning to the workplace. Educational Psychology, 25, 455–469. doi:10.1080/01443410500045517 Murray, H. G., & Renaud, R. D. (1995). Disciplinary differences in teaching and learning: Implications for practice. New Directions for Teaching and Learning, 64, 31–39. doi:10.1002/tl.37219956406
Moore, M. G. (1989) Editorial: Three types of interaction. American Journal of Distance Education, 3(2), 1–7. Retrieved January 8, 2010, from http://aris.teluq.uquebec. ca/Portals/598/t3_moore1989.pdf
Meyer, K. A. (2002). Quality in distance education: Focus on online learning. ASHE-ERIC Higher Education Report, 29(4), 1–121.
Moore, M. G., & Kearsley, G. (2005). Distance education: A systems view (2nd ed.). Belmont, CA: Wadsworth Publishing Company.
Nath, L., & Ralston-Berg, P. (2008). Why “quality matters” matters: What students value. Paper presented at the American Sociological Association 2008 Annual Conference.
Morgeson, F. P., & Hofmann, D. A. (1999). The structure and function of collective constructs: Implications for multilevel research and theory development. Academy of Management Review, 24, 249–265. doi:10.2307/259081 Mount, M. K., & Barrick, M. R. (1998). Five reasons why the `Big-Five’ article has been frequently cited. Personnel Psychology, 51(4), 849–857. doi:10.1111/j.1744-6570.1998.tb00743.x
National Research Council. (2002). Scientific research in education. In Shavelson, R. J., & Towne, L. (Eds.), Committee on scientific principles for educational research. Washington, DC: National Academy Press. Naumann, J. D., & Jenkins, A. M. (1982). Prototyping: The new paradigm for systems development. Management Information Systems Quarterly, (September): 29–44. doi:10.2307/248654
Mount, M. K., Barrick, M. R., Laffitte, L. J., & Callans, M. C. (1999). Administrator’s guide for the personal characteristics inventory. Technical Manual. Libertyville, IL: Wonderlic, Inc.
Navarro, P. (2008). The core curricula of top-ranked U.S. business schools: A study in failure? Academy of Management Learning & Education, 7, 108–123.
Muilenburg, L. Y., & Berge, Z. L. (2005). Student barriers to online learning: A factor analytic study. Distance Education, 26(1), 29–48. doi:10.1080/01587910500081269
Navarro, P., & Shoemaker, J. (2000). Performance and perceptions of distance learners in cyberspace. American Journal of Distance Education, 14(2), 15–35. doi:10.1080/08923640009527052 Navy Integrated Learning Environment (Navy ILE). (2009). Introduction. Retrieved on December 29, 2009, from https://ile-help.nko.navy.mil/ile/
Nemanich, L., Banks, M., & Vera, D. (2009). Enhancing knowledge transfer in classroom versus online settings: The interplay among instructor, student, content, and context. Decision Sciences Journal of Innovative Education, 7, 123–148. doi:10.1111/j.1540-4609.2008.00208.x
Nunes, M. B., & McPherson, M. (2003). Constructivism vs objectivism: Where is the difference for designers of e-learning environments? Proceedings of the 3rd IEEE International Conference on Advanced Learning Technologies.
Neumann, R. (2001). Disciplinary differences and university teaching. Studies in Higher Education, 26, 135–146. doi:10.1080/03075070120052071
Nunnally, J. C., & Bernstein, I. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw Hill.
Neumann, R., Parry, S., & Becher, T. (2002). Teaching and learning in their disciplinary contexts: A conceptual analysis. Studies in Higher Education, 27, 405–417. doi:10.1080/0307507022000011525 Newton, J. (2000). Feeding the beast or improving quality? Academics' perceptions of quality assurance and quality monitoring. Quality in Higher Education, 6(2), 153–163. doi:10.1080/713692740 NIST. (2009). Malcolm Baldrige National Quality Award: 2009-10 criteria for performance excellence. Gaithersburg, MD: National Institute of Standards and Technology of the United States Department of Commerce. Nixon, J. C., & Helms, M. M. (1997). Developing the virtual classroom: A business school example. Education + Training, 39(9), 349–353. Noller, P., Law, H., & Comrey, A. L. (1987). Cattell, Comrey, and Eysenck personality factors compared: More evidence for the five robust factors? Journal of Personality and Social Psychology, 53, 775–782. doi:10.1037/0022-3514.53.4.775 Noonan, R. B. (1979). PLS path modelling with latent variables: Analysing school survey data using partial least squares. Stockholm: Institute of International Education, University of Stockholm. Norman, W. T. (1963). Toward an adequate taxonomy of personality attributes: Replicated factor structure in peer nomination personality ratings. Journal of Abnormal and Social Psychology, 66, 574–583. doi:10.1037/h0040291 Northrup, P. T. (2002). Online learners' preferences for interaction. The Quarterly Review of Distance Education, 3(2), 219–226.
O’Connor, M. C., & Paunonen, S. V. (2007). Big-Five personality predictors of post-secondary academic performance. Personality and Individual Differences, 43, 971–990. doi:10.1016/j.paid.2007.03.017 O’Loughlin, M. (1992). Rethinking science education: Beyond Piagetian constructivism toward a sociocultural model of teaching and learning. Journal of Research in Science Teaching, 29(8), 791–820. doi:10.1002/ tea.3660290805 O’Toole, J. (2009). The pluralistic future of management education. In Armstrong, S. J., & Fukami, C. V. (Eds.), The SAGE handbook of management learning, education, and development (pp. 547–558). London, UK: SAGE Publications. Olapiriyakul, K., & Scher, J. M. (2006). A guide to establishing hybrid learning courses: Employing information technology to create a new learning experience, and a case study. The Internet and Higher Education, 9, 287–301. doi:10.1016/j.iheduc.2006.08.001 Oldham, G. R., & Cummings, A. (1996). Employee creativity: Personal and contextual factors at work. Academy of Management Journal, (June): 607–634. doi:10.2307/256657 Oliver, R. (2005). Using a blended learning approach to support problem-based learning with first year students in large undergraduate classes. Frontiers in Artificial Intelligence and Applications, 133, 848-851. Retrieved January 8, 2010, from http://elrond.scam.ecu.edu.au/ oliver/2005/pbl.pdf O’Neil, K., Singh, G., & O’Donoghue, J. (2004). Implementing e-learning programmes for higher education: A review of the literature. Journal of Information Technology Education, 3, 313–323.
Ong, C. H., & Lai, J. Y. (2006). Gender differences in perceptions and relationships among dominants of elearning acceptance. Computers in Human Behavior, 22(5), 816–829. doi:10.1016/j.chb.2004.03.006 Ortega Amieva, D. C. (2006). La asociación nacional de universidades e instituciones de educación superior y el uso de las tic en educación. In Garrido Noguera, C. (Ed.), El uso de las tecnologías de comunicación e información en la educación superior. Experiencias internacionales (pp. 73–82). Mexico City, Mexico: ELAC. Osguthorpe, R. T., & Graham, C. R. (2003). Blended learning environments, definitions and directions. The Quarterly Review of Distance Education, 4(3), 227–233. O’Sullivan, P. B. (2000). Communication technologies in an educational environment: Lessons from a historical perspective. In Cole, R. A. (Ed.), Issues in Web-based pedagogy: A critical primer (pp. 49–64). Westport, CT: Greenwood press. Ozdemir, Z. D., Altinkemer, K., & Barron, J. M. (2008). Adoption of technology-mediated learning in the U.S. Decision Support Systems, 45, 324–337. doi:10.1016/j. dss.2008.01.001 Paechter, M., Maier, B., & Macher, D. (2010). Students’ expectations of, and experiences in e-learning: Their relation to learning achievements and course satisfaction. Computers & Education, 54(1), 222–229. doi:10.1016/j. compedu.2009.08.005 Palloff, R. M., & Pratt, K. (2005). Collaborating online: Learning together in community. San Francisco, CA: Jossey-Bass. Palloff & Pratt. (1999). Building learning communities in cyberspace: Effective strategies for the online classroom. San Francisco, CA: Jossey-Bass Publishers. Parasuraman, A. (2000). Technology readiness index (TRI): A multiple-item scale to measure readiness to embrace new technologies. Journal of Service Research, 2(4), 307–320. doi:10.1177/109467050024001 Park, Y. J., & Bonk, C. J. (2007). Is life a Breeze?: A case study for promoting synchronous learning in a blended graduate course. Journal of Online Learning and Teaching, 3(3), 307–323.
Parker, J. D., Creque, R. E., Barnhart, D. L., Harris, J. I., Majeski, S. A., & Wood, L. M. (2004). Academic achievement in high school: Does emotional intelligence matter? Personality and Individual Differences, 37(7), 1321–1330. doi:10.1016/j.paid.2004.01.002 Parker, N. K. (2008). The quality dilemma in online education revisited. In Anderson, T. (Ed.), The theory and practice of online learning (pp. 305–342). Edmonton, AB: AU Press, Athabasca University. Parsad, B., & Lewis, L. (2008). Distance education at degree-granting postsecondary institutions: 2006–07 (NCES 2009–044). National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education. Washington, DC. Parthasarathy, M., & Smith, M. A. (2009). Valuing the institution: An expanded list of factors influencing faculty adoption of online education. Online Journal of Distance Learning Administration, 12(2). Retrieved October 15, 2009, from http://www.westga.edu/~distance/ojdla/summer122/parthasarathy122.html Paulk, M. C., Curtis, B., Chrissis, M. B., & Weber, C. V. (1993). Capability maturity model, version 1.1. IEEE Software, 10(4), 18–27. doi:10.1109/52.219617 Pawan, F., Paulus, T. M., Yalcin, S., & Chang, C. F. (2003). Online learning: Patterns of engagement and interaction among in-service teachers. Language Learning & Technology, 7(3), 119–140. Peabody, D., & Goldberg, L. R. (1989). Some determinants of factor structures from personality trait descriptors. Journal of Personality and Social Psychology, 57, 552–567. doi:10.1037/0022-3514.57.3.552 Pedhazur, E. J. (1982). Multiple regression in behavioral research (2nd ed.). New York, NY: Holt, Rinehart and Winston. Peltier, J. W., Drago, W., & Schibrowsky, J. A. (2003). Virtual communities and the assessment of online marketing education. Journal of Marketing Education, 25, 260–276. doi:10.1177/0273475303257762 Peltier, J. W., Schibrowsky, J. A., & Drago, W. (2007). The interdependence of the factors influencing the perceived quality of the online learning experience: A causal model. Journal of Marketing Education, 29, 140–153. doi:10.1177/0273475307302016
Pennsylvania State University. (n.d.). The IPIP-NEO (International Personality Item Pool Representation of the NEO PI-R™). Retrieved November 26, 2007 from http://www.personal.psu.edu/~j5j/IPIP/ Perreault, H., Waldman, L., Alexander, M., & Zhao, J. (2002). Overcoming barriers to successful delivery of distance-learning courses. Journal of Education for Business, 77, 313–318. doi:10.1080/08832320209599681 Perrenet, J. C., Bouhuijs, P. A. J., & Smits, J. G. M. M. (2000). The suitability of problem-based learning for engineering education: Theory and practice. Teaching in Higher Education, 5(3), 345–358. doi:10.1080/713699144 Petrides, K. V., Frederickson, N., & Furnham, A. (2004). The role of trait emotional intelligence in academic performance and deviant behavior at school. Personality and Individual Differences, 36, 277–293. doi:10.1016/ S0191-8869(03)00084-9 Phillips, P., Abraham, C., & Bond, R. (2003). Personality, cognition, and university students’ examination performance. European Journal of Personality, 17, 435–448. doi:10.1002/per.488
Pikkarainen, T., Pikkarainen, K., Karjaluoto, H., & Pahnila, S. (2004). Consumer acceptance of online banking: An extension of the technology acceptance model. Internet Research, 14(3), 224–235.. doi:10.1108/10662240410542652 Pitt, T. J. (1996). The multi-user object oriented environment: A guide with instructional strategies for use in adult education. Unpublished manuscript. Pittenger, D. J. (1993). The utility of the Myers-Briggs type indicator. Review of Educational Research, 63, 467–488. Pittinsky, M., & Chase, B. (2000). Quality on the line: Benchmarks for success in Internet-based distance education. Washington, D.C.: The Institute for Higher Education Policy, National Education Association. Pituch, K. A., & Lee, Y. K. (2006). The influence of system characteristics on e-learning use. Computers & Education, 47, 222–244. doi:10.1016/j.compedu.2004.10.007 Pollacia, L., & McCallister, T. (2009). Using Web 2.0 technologies to meet quality matters (QM) requirements. Journal of Information Systems Education, 20(2), 155–164.
Phipps, R., & Merisotis, J. (2000). Quality on the line: Benchmarks for success in Internet-based distance education. Institute for Higher Education Policy. Retrieved from http://www.eric.ed.gov/ ERICDocs/ data/ ericdocs2sql/ content_storage_01/ 0000019b/ 80/ 16/ 67/ ba.pdf
Popovich, C. J., & Neel, R. E. (2005). Characteristics of distance education programs at accredited business schools. American Journal of Distance Education, 19, 229–240. doi:10.1207/s15389286ajde1904_4
Phoha, V. V. (1999). Can a course be taught entirely via email? Communications of the ACM, 42(9), 29–30. doi:10.1145/315762.315768
Potter, B. N., & Johnston, C. G. (2006). The effect of interactive online learning systems on student learning outcomes in accounting. Journal of Accounting Education, 24, 16–34. doi:10.1016/j.jaccedu.2006.04.003
Picciano, A. G. (2002). Beyond student perceptions: Issues of interaction, presence, and performance in an online course. Journal of Asynchronous Learning Networks, 6(1), 21–40. Piccoli, G., Ahmad, R., & Ives, B. (2001). Web-based virtual learning environments: A research framework and a preliminary assessment of effectiveness in basic IT skills training. Management Information Systems Quarterly, 25(1), 401–426. doi:10.2307/3250989
Pratt, M. G. (2009). From the editors. For the lack of a boilerplate: Tips on writing up (and reviewing) qualitative research. Academy of Management Journal, 52(5), 856–862. Priluck, R. (2004). Web-assisted courses for business education: An examination of two sections of principles of marketing. Journal of Marketing Education, 26(2), 161–173. doi:10.1177/0273475304265635 Pucel, D. J., & Stertz, T. F. (2005). Effectiveness of and student satisfaction with Web-based compared to traditional in-service teacher education courses. Journal of Industrial Teacher Education, 42(1), 7–23.
QM. (2006). Welcome to Quality Matters. Retrieved October 26, 2009, from http://www.qualitymatters.org/ QM. (2009). Quality Matters rubric standards 20082010 edition. Retrieved October 26, 2009, from http:// qminstitute.org/ home/ Public%20Library/ About%20 QM/ RubricStandards2008-2010.pdf QualityMatters. (2010). Quality Matters: Inter-institutional quality assurance in online learning. Retrieved from http://www.qualitymatters.org/ Raaij, E. M., & Schepers, J. J. L. (2008). The acceptance and use of a virtual learning environment in China. Computers & Education, 50, 838–852. doi:10.1016/j. compedu.2006.09.001 Rabak, L., & Cleveland-Innes, M. (2006). Acceptance and resistance to corporate e-learning: A case from the retail sector. Journal of Distance Education, 115–134. Rai, A., Lang, S. S., & Welker, R. B. (2002). Assessing the validity of IS success models: An empirical test and theoretical analysis. Information Systems Research, 13(1), 50–69. doi:10.1287/isre.13.1.50.96 Ramaswami, M., & Bhaskaran, R. (2010). A CHAID based performance prediction model in educational data mining. IJCSI International Journal of Computer Science Issues, 7(1), 10–18. Ranjan, J. (2008). Impact of Information Technology in academia. International Journal of Educational Management, 22(5), 442–455. doi:10.1108/09513540810883177 Ranjan, J., & Malik, K. (2007). Effective educational process: A data mining approach. Vine, 37(4), 502–515. doi:10.1108/03055720710838551 Reigeluth, C. M. (Ed.). (1983). Instructional design theories and models: An overview of their current status. Hillsdale, NJ: Erlbaum. Reiser, R. A., & Gagne, R. M. (1983). Selecting media for instruction. Englewood Cliffs, NJ: Instructional Technology. Reisetter, M., & Boris, G. (2004). What works. Quarterly Review of Distance Education, 5(4), 277–291.
Repman, J., Zinskie, C., & Carlson, R. (2005). Effective use of CMC tools in interactive online learning. Computers in the Schools, 22(1/2), 57–69. doi:10.1300/J025v22n01_06 Reusable Learning. (2009). Retrieved on December 29, 2009, from http://www.reusablelearning.org Richardson, J. C., & Swan, K. (2003). Examining social presence in online courses in relation to students' perceived learning and satisfaction. Journal of Asynchronous Learning Networks, 7(1), 68–88. Riley, R. W., Fritschler, A. L., & McLaughlin, M. A. (2002). Report to Congress on the distance education demonstration programs. U.S. Department of Education, Office of Postsecondary Education, Policy, Planning, and Innovation, Washington, D.C. Rivard, S., & Huff, S. L. (1988). Factors of success for end-user computing. Communications of the ACM, 31(5), 552–561. doi:10.1145/42411.42418 Robbins, S., & Coulter, M. (2009). Management (10th ed.). Upper Saddle River, NJ: Prentice Hall. Robinson, V. M. J. (1993). Current controversies in action research. Public Administration Quarterly, 17(3), 263–290. Robinson, J. L. (2006). Moving beyond adoption: Exploring the determinants of student intention to use technology. Marketing Education Review, 16(2), 79–88. Roblyer, M. D., & Davis, L. (2008). Predicting success for virtual school students: Putting research-based models into practice. Online Journal of Distance Learning Administration, 11(4). Retrieved from http://www.westga.edu/~distance/ojdla/winter114/roblyer114.html Rodriguez, F. G., & Nash, S. S. (2004). Technology and the adult degree program: The human element. New Directions for Adult and Continuing Education, 103, 73–79. doi:10.1002/ace.150 Roffe, I. (2002). E-learning: Engagement, enhancement and execution. Quality Assurance in Education, 10(1), 40–50. doi:10.1108/09684880210416102
Rohrkemper, M. (1989). Self-regulated learning and academic achievement: A Vygotskian view. In Zimmerman, B. J., & Schunk, D. H. (Eds.), Self-regulated learning and academic achievement: Theory, research, and practice (pp. 143–167). New York, NY: Springer.
Rovai, A. P. (2002). Sense of community, perceived cognitive learning, and persistence in asynchronous learning networks. The Internet and Higher Education, 5(4), 319–332. doi:10.1016/S1096-7516(02)00130-6
Romero, C., & Ventura, S. (2007). Educational data mining: A survey from 1995 to 2005. Expert Systems with Applications, 33, 135–146. doi:10.1016/j.eswa.2006.04.005
Rovai, A. P., & Barnum, K. T. (2003). Online course effectiveness: An analysis of student interactions and perceptions of learning. Journal of Distance Education, 18(1), 57–73.
Rooij, S. W. (2009). Scaffolding project-based learning with the project management body of knowledge (PMBOK). Computers & Education, 52(1), 210–219. doi:10.1016/j.compedu.2008.07.012
Rovai, A. P., & Jordan, H. M. (2004). Blended learning and sense of community: A comparative analysis with traditional and fully online graduate courses. International Review of Research in Open and Distance Learning, 5.
Ross, S. M., Morrison, G. R., & Lowther, D. L. (2005). Using experimental methods in higher education research. Journal of Computing in Higher Education, 16(4), 39–64. doi:10.1007/BF02961474
Rovai, A. P., & Wighting, M. (2005). Feelings of alienation and community among higher education students in a virtual classroom. The Internet and Higher Education, 8, 97–110. doi:10.1016/j.iheduc.2005.03.001
Rossi, J. S. (1985). Tables of effect size for z score tests of differences proportions and correlation coefficients. Educational and Psychological Measurement, 45, 737–743. doi:10.1177/0013164485454004
Rubin, R. S., & Dierdorff, E. C. (2009). How relevant is the MBA? Assessing the alignment of required curricula and required managerial competencies. Academy of Management Learning & Education, 8, 208–224.
Rothstein, M. G., Paunonen, S. V., Rush, J. C., & King, G. A. (1994). Personality and cognitive ability predictors of performance in graduate business school. Journal of Educational Psychology, 86, 516–530. doi:10.1037/00220663.86.4.516
Rubio Oca, J. (2006). La educación superior y la sociedad de la información en méxico. In Garrido Noguera, C. (Ed.), El uso de las tecnologías de comunicación e información en la educación superior. Experiencias internacionales (pp. 10–21). Mexico City, Mexico: ELAC.
Rourke, L., & Anderson, T. (2002). Using peer teams to lead online discussion. Journal of Interactive Media in Education, 1.
Rungtusanatham, M., Ellram, L. M., Siferd, S. P., & Salik, S. (2004). Toward a typology of business education in the Internet age. Decision Sciences Journal of Innovative Education, 2, 101–120. doi:10.1111/j.15404609.2004.00040.x
Rourke, L., & Anderson, T. (2004). Validity in quantitative content analysis. Educational Technology Research and Development, 52(1), 5–18. doi:10.1007/BF02504769 Rourke, L., Anderson, T., Garrison, D. R., & Archer, W. (2001). Assessing social presence in asynchronous, text -based computer conferencing. Journal of Distance Education, 14(3), 51–70. Rovai, A. P. (2000). Online and traditional assessments: What’s the difference? The Internet and Higher Education, 3, 141–151. doi:10.1016/S1096-7516(01)00028-8 Rovai, A. P. (2002). Building sense of community at a distance. International Review of Research in Open and Distance Learning, 3(1). Retrieved November 25, 2009, from http://www.irrodl.org/content/v3.1/rovai.html
Saadé, R. G. (2007). Dimensions of perceived usefulness: Towards enhanced assessment. Decision Sciences Journal of Innovative Education, 5(2), 289–310. doi:10.1111/j.1540-4609.2007.00142.x Saadé, R. G., & Bahli, B. (2005). The impact of cognitive absorption on perceived usefulness and perceived ease of use in online learning: An extension of the technology acceptance model. Information & Management, 42, 317–327. doi:10.1016/j.im.2003.12.013
Saadé, R. G., Tan, W., & Nebebe, F. (2008). Impact of motivation on intentions in online learning: Canada vs. China. Issues in Informing Science and Information Technology, 5, 137–147.
Schroeder, R. G., Linderman, K., Liedtke, C., & Choo, A. S. (2008). Six Sigma: Definition and underlying theory. Journal of Operations Management, 26(4), 536–554. doi:10.1016/j.jom.2007.06.007
SACS. (2008). The principles of accreditation: Foundations for quality enhancement. Retrieved December 3, 2009, from http://www.sacscoc.org/ pdf/ 2008PrinciplesofAccreditation.pdf
Schrum, L., & Hong, S. (2002). Dimensions and strategies for online success: Voices from experienced educators. Journal of Asynchronous Learning Networks, 6(1).
Sadler, D. R. (1998). Formative assessment: Revisiting the territory. Assessment in Education, 5(1), 77–84. doi:10.1080/0969595980050104 Sahin, I. (2007). Predicting student satisfaction in distance education and learning environments. Turkish Online Journal of Distance Education, 8(2), 113–119. Retrieved September 23, 2009, from http://tojde.anadolu.edu.tr/tojde26/pdf/article_9.pdf Santhanam, R., Sasidharan, S., & Webster, J. (2008). Using self-regulatory learning to enhance e-learning-based Information Technology training. Information Systems Research, 19(1), 26–47. doi:10.1287/isre.1070.0141 Sargenti, P., Lightfoot, W., & Kehal, M. (2006). Diffusion of knowledge in and through higher education organizations. Issues in Information Systems, 3(2), 3–8. Schank, R. C. (2001). Revolutionizing the traditional classroom course. Communications of the ACM, 44(12), 21–24. doi:10.1145/501317.501330 Schluep, S., Bettoni, M., & Guttormsen Schär, S. (2005). Modularization and structured markup for Web-based learning content in an academic environment. In Proceedings of the PROLEARN-iClass thematic workshop on Learning Objects in Context, Leuven, Belgium. Schniederjans, M., & Kim, E. B. (2005). Relationship of student undergraduate achievement and personality characteristics in a total Web-based environment: An empirical study. Decision Sciences Journal of Innovative Education, 3(2), 205–221. doi:10.1111/j.1540-4609.2005.00067.x Schreiner, L. A. (2009). Linking student satisfaction with retention. Retrieved January 19, 2010, from https://www.noellevitz.com/NR/rdonlyres/A22786EF-65FF-4053A15A-CBE145B0C708/0/LinkingStudentSatis0809.pdf
Schuell, T. J. (1986). Cognitive conceptions of learning. Review of Educational Research, (Winter): 411–436. Schumacker, R. E., & Lomax, R. G. (2004). A beginner’s guide to structural equation modeling (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates, Publishers. Schunk, D. H. (1986). Verbalization and children’s selfregulated learning. Contemporary Educational Psychology, 11, 347–369. doi:10.1016/0361-476X(86)90030-5 SECAT. (1998). Why would you want to use a capability maturity model. Systems Engineering Capability Assessment & Training. Selim, H. M. (2007). Critical success factors for elearning acceptance: Confirmatory factor models. Computers & Education, 49, 396–413. doi:10.1016/j. compedu.2005.09.004 Shah, R., & Ward, P. T. (2007). Defining and developing measures of lean production. Journal of Operations Management, 25(4), 785–805. doi:10.1016/j.jom.2007.01.019 Sharable Content Object Reference Model (SCORM). (2009). Retrieved on December 29, 2009, from http:// www.adlnet.org Shea, P. (2006). A study of students’ sense of learning community in online environments. Journal of Asynchronous Learning Networks, 10(1), 35–44. Shea, P. (2007). Bridges and barriers to teaching online college courses: A study of experienced online faculty in thirty-six colleges. Journal of Asynchronous Learning Networks, 11(2), 73–128. Shea, P., & Bidjerano, T. (2009). Community of inquiry as a theoretical framework to foster epistemic engagement and cognitive presence in online education. Computers & Education, 52(3), 543–553. doi:10.1016/j. compedu.2008.10.007
Shea, P., Li, C. S., & Pickett, A. (2006). A study of teaching presence and student sense of learning community in fully online and Web-enhanced college courses. The Internet and Higher Education, 9(3), 175–190. doi:10.1016/j.iheduc.2006.06.005 Shea, P. J., Pickett, A. M., & Pelz, W. E. (2003). A follow-up investigation of teaching presence in the SUNY learning network. Journal of Asynchronous Learning Networks, 7(2), 61–80. Shea, P. J., Pickett, A. M., & Pelz, W. E. (2004). Enhancing student satisfaction through faculty development: The importance of teaching presence. In J. Bourne & J. C. Moore (Eds.), Elements of quality online education: Into the mainstream - volume 5 in the Sloan-C series (pp. 39–59). Needham, MA: Sloan Center for Online Education. Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1–22. Shulman, L. S. (2005). Signature pedagogies in the professions. Daedalus, 134(3), 52–59. doi:10.1162/0011526054622015 Shyamala, K., & Rajagopalan, S. P. (2006). Data mining model for a better higher educational system. Information Technology Journal, 5(3), 560–564. doi:10.3923/itj.2006.560.564 Simmering, M. J., Posey, C., & Piccoli, G. (2009). Computer self-efficacy and motivation to learn in a self-directed online course. Decision Sciences Journal of Innovative Education, 7(1), 99–121. doi:10.1111/j.1540-4609.2008.00207.x Singh, H. (2003). Building effective blended learning programmes. Educational Technology, 43, 51–54. Sitzmann, T., Ely, K., Brown, K. G., & Bauer, K. N. (2010). Self-assessment of knowledge: A cognitive learning or affective measure? Academy of Management Learning & Education, 9, 169–191.
Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative effectiveness of Web-based and classroom instruction: A meta-analysis. Personnel Psychology, 59, 623–664. doi:10.1111/j.1744-6570.2006.00049.x Skelton, D. (2009). Blended learning environments: Students report their preferences. In Proceedings of the Twenty Second Annual Conference of the National Advisory Committee on Computing Qualifications. Retrieved January 10, 2010, from http://hyperdisc.unitec.ac.nz/ naccq09/proceedings/pdfs/105-114.pdf Slavin, R. E. (1990). Cooperative learning: Theory, research, and practice. Englewood Cliffs, NJ: Prentice Hall. Smart, J. C., & Ethington, C. A. (1995). Disciplinary and institutional differences in undergraduate education goals: Implications for practice. New Directions for Teaching and Learning, 64, 49–57. doi:10.1002/tl.37219956408 Smart, K., & Cappel, J. (2006). Students’ perceptions of online learning: A comparative study. Journal of Information Technology Education, 5, 201–219. Smeby, J.-C. (1996). Disciplinary differences in university teaching. Studies in Higher Education, 21(1), 69–79. doi :10.1080/03075079612331381467 Smith, G. G., Ferguson, D., & Caris, M. (2003). The Web versus the classroom: Instructor experiences in discussion-based and mathematics-based disciplines. Journal of Educational Computing Research, 29(1), 29–59. doi:10.2190/PEA0-T6N4-PU8D-CFUF Smith, G. G., Heindel, A. J., & Torres-Ayala, A. T. (2008). E-learning commodity or community: Disciplinary differences between online courses. The Internet and Higher Education, 11, 152–159. doi:10.1016/j. iheduc.2008.06.008 Smith, J. B., & Barclay, D. W. (1997). The effects of organizational differences and trust on the effectiveness of selling partner relationships. Journal of Marketing, 61(1), 3–21. doi:10.2307/1252186 Smith, L. J. (2001). Content and delivery: A comparison and contrast of electronic and traditional MBA marketing planning courses. Journal of Marketing Education, 23(1), 35–44. doi:10.1177/0273475301231005
Smith, P. L., & Dillon, C. L. (1999). Comparing distance learning and classroom learning: Conceptual considerations. American Journal of Distance Education, 13, 107–124. doi:10.1080/08923649909527020 So, H.-J., & Brush, T. A. (2008). Student perceptions of collaborative learning, social presence and satisfaction in a blended learning environment: Relationships and critical factors. Computers & Education, 51(1), 318–336. doi:10.1016/j.compedu.2007.05.009 Soon, K. H., Sook, K. I., Jung, C. W., & Im, K. M. (2000). The effects of Internet-based distance learning in nursing. Computers in Nursing, 18(1), 19–25. Sorensen, J. B., & Stuart, T. E. (2000). Aging, obsolescence, and organizational innovation. Administrative Science Quarterly, (March): 81–112. doi:10.2307/2666980 Spector, P. E. (1992). Summated rating scale construction. Newbury Park, CA: Sage Publications. Stanovich, P. J., & Stanovich, K. E. (2003). Using research and reason in education: How teachers can use scientifically based research to make curricular & instructional decisions. Portsmouth, NH: RMC Research Corporation. Stellwagen, J. B. (2001). A challenge to the learning style advocates. Clearing House (Menasha, Wis.), 74(5), 265–268. doi:10.1080/00098650109599205 Stewart, C. L., & Waight, B. L. (2005). Valuing the adult learner in e-learning: Part one – a conceptual model for corporate settings. Journal of Workplace Learning, 337–345. Stoel, L., & Lee, K. H. (2003). Modeling the effect of experience on student acceptance of Web-based course software. Internet Research: Electronic Networking Applications and Policy, 13, 364–374. doi:10.1108/10662240310501649 Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society. Series A (General), 36(2), 111–133. Strother, J. (2002). An assessment of the effectiveness of e-learning in corporate training programs. International Review of Research in Open and Distance Learning, 3(1).
Sun, P. C., Cheng, H. K., & Finger, G. (2009). Critical functionalities of a successful e-learning system: An analysis from instructors’ cognitive structure toward system usage. Decision Support Systems, 48, 293–302. doi:10.1016/j.dss.2009.08.007 Sun, P. C., Tsai, R. J., Finger, G., Chen, Y. Y., & Yeh, D. (2008). What drives a successful e-learning? An empirical investigation of the critical success factors influencing learner satisfaction. Computers & Education, 50, 1183–1202. doi:10.1016/j.compedu.2006.11.007 Sutter, J. D. (Producer). (2010, March 17) Why teaching is not like making motorcars. CNN Opinion. Retrieved from http://www.cnn.com/ 2010/ OPINION/ 03/ 17/ ted. ken.robinson/ index.html Swan, K., & Shih, L. F. (2005). On the nature and development of social presence in online course discussions. Journal of Asynchronous Learning Networks, 9(3), 115–136. Talavera, L., & Gaudioso, E. (2004). Mining student data to characterize similar behavior groups in unstructured collaboration spaces. Proceedings of Workshop on Artificial Intelligence in Computer Supported Collaborative Learning at European Conference on Artificial Intelligence, Valencia, Spain, (pp. 17-23). Tallent-Runnels, M.-K. (2005). The relationship between problems with technology and graduate students’ evaluations of online teaching. The Internet and Higher Education, 8, 167–174. doi:10.1016/j.iheduc.2005.03.005 Tanaka, J. S., & Huba, G. J. (1984). Confirmatory hierarchical factor analyses of psychological distress measures. Journal of Personality and Social Psychology, 46, 621–635. doi:10.1037/0022-3514.46.3.621 Tanaka, J. S. (1993). Multifaceted conceptions of fit in structural equation models. In Bollen, K. A., & Long, J. S. (Eds.), Testing structural equation models (pp. 10–39). Newbury Park, CA: Sage Publications. Teh, G. P. L. (1999). Assessing student perceptions of Internet-based online learning environment. International Journal of Instructional Media, 26(4), 397–402. Tello, S. F. (2007). An analysis of student persistence in online education. International Journal of Information and Communication Technology Education, 3(3), 7–2. doi:10.4018/jicte.2007070105
Terry, N. (2001). Assessing enrollment and attrition rates for the online MBA. T.H.E. Journal, 28(7), 64–68. Thomas, R., & MacGregor, K. (2005). Online project-based learning: How collaborative strategies and problem solving processes impact performance. Journal of Interactive Learning Research, 16(1), 83–107. Thomas, J. W. (2000). A review of research on project-based learning. Retrieved June 16, 2009, from http://www.bobpearlman.org/BestPractices/PBL_Research2.pdf Thompson, B. (2007). Effect sizes, confidence intervals and confidence intervals for effect sizes. Psychology in the Schools, 44(5), 423–432. doi:10.1002/pits.20234 Thompson, J. D., Hawkes, R. W., & Avery, R. W. (1969). Truth strategies and university organization. Educational Administration Quarterly, 5(2), 4–25. doi:10.1177/0013131X6900500202 Thompson, R., Higgins, C., & Howell, J. (1994). Influence of experience on personal computer utilization: Testing a conceptual model. Journal of Management Information Systems, 11(1), 167–187. Thorndike, E. L. (1920). Intelligence and its uses. Harper's Magazine, 140, 227–235. Thurmond, V. A., Wambach, K., Connors, H. R., & Frey, B. B. (2002). Evaluation of student satisfaction: Determining the impact of a Web-based environment by controlling for student characteristics. American Journal of Distance Education, 16(3), 169–189. doi:10.1207/S15389286AJDE1603_4 Tolsby, H., Nyvang, T., & Dirckinck-Holmfeld, L. (2002). A survey of technologies supporting virtual project based learning. In Proceedings of the Third International Conference on Networked Learning 2002 (pp. 572-580), Lancaster University and Sheffield University. Torres Nabel, L. C. (2006). La educación a distancia en México: ¿Quién y cómo la hace? Apertura, 6(4), 74–89. Training Magazine. (2009). The 2009 training industry report - executive summary. Retrieved from http://www.training.com
Trank, C. Q., & Rynes, S. L. (2003). Who moved our cheese? Reclaiming professionalism in business education. Academy of Management Learning & Education, 2, 189–205. Trapmann, S., Hell, B., Hirn, J. W., & Schuler, H. (2007). Meta-analysis of the relationship between the Big-Five and academic success at university. The Journal of Psychology, 215(2), 132–151. Trochim, W. M. (2006). Likert scaling. Retrieved May 14, 2009, from http://www.socialresearchmethods.net/ kb/ scallik.php Trochim, W. M. (2006). Deduction & induction. Retrieved June 25, 2009, from http://www.socialresearchmethods. net/ kb/ dedind.php Tu, C. H. (2001). How Chinese perceive social presence: An examination of interaction in an online learning environment. Educational Media International, 38(1), 45–60. doi:10.1080/09523980010021235 Tu, C. H., & McIsaac, M. (2002). The relationship of social presence and interaction in online classes. American Journal of Distance Education, 16(3), 131–150. doi:10.1207/S15389286AJDE1603_2 Tung, F., & Deng, Y. (2007). Increasing social presence of social actors in e-learning environments: Effects of dynamic and static emoticons on children. Displays, 28, 174–180. doi:10.1016/j.displa.2007.06.005 Türel, Y. K., & Gürol, M. (2005). A new approach for elearning: Rapid e-learning. Proceeding of 5th International Educational Technology Conference, Sakarya, Turkey. Twigg, C. (2003). Improving learning and reducing cost: New models for online learning. EDUCAUSE Review, 38, 28–38. U.S. Congress. (2001). No Child Left Behind Act of 2001. (Pub.L.No. 107-110,115 Stat. 1425). USDOE. (2006). Evidence of quality in distance education program drawn from interviews with the accreditation community. Retrieved October 26, 2009, from http:// www.ysu.edu/ accreditation/ Resources/ AccreditationEvidence-of-Quality-in-DE-Programs.pdf
Vacha-Haase, T., & Thompson, B. (2004). How to estimate and interpret various effect sizes. Journal of Counseling Psychology, 51, 473–481. doi:10.1037/00220167.51.4.473 van der Rhee, B., Verma, R., Plaschka, G. R., & Kickul, J. R. (2007). Technology readiness, learning goals, and e-learning: Searching for synergy. Decision Sciences Journal of Innovative Education, 5(1), 127–149. doi:10.1111/j.1540-4609.2007.00130.x Van der Spuy, M., & Wöcke, A. (2006). The effectiveness of technology based (interactive) distance learning methods in a large South African financial. South African Journal of Business Management, 34(2). Van Patten, J., Chao, C., & Riegeluth, C. (1986). A review of strategies for sequencing and synthesizing instruction. Review of Educational Research, 56, 437–471. doi:10.3102/00346543056004437 Vaughan, N., & Garrison, D. R. (2005). Creating cognitive presence in a blended faculty development community. The Internet and Higher Education, 8(1), 1–12. doi:10.1016/j. iheduc.2004.11.001 Vaughan, N. (2007). Perspectives on blended learning in higher education. International Journal on E-Learning, 6(1), 81-94. Chesapeake, VA: AACE. Retrieved January 15, 2010, from http://www.editlib.org/p/6310 Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204. doi:10.1287/mnsc.46.2.186.11926 Venkatesh, V., & Morris, M. G. (2000). Why don’t men ever stop to ask for directions? Gender, social influence, and their role in technology acceptance and usage behavior. Management Information Systems Quarterly, 24(1), 115–139. doi:10.2307/3250981 Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of Information Technology: Toward a unified view. Management Information Systems Quarterly, 27(3), 425–478. Volery, T., & Lord, D. (2000). Critical success factors in online education. International Journal of Educational Management, 14(5), 216–223. doi:10.1108/09513540010344731
Vonderwell, S., & Savery, J. (2004). Online learning: Student roles and readiness. The Turkish Online Journal of Educational Technology, 3(3), 38–42. Waiyamai, K. (2004). Improving quality of graduate students by data mining. Faculty of Engineering, Kasetsart University, Frontiers of ICT Research International Symposium. Wan, Z., Fang, Y., & Neufeld, D. J. (2007). The role of information technology in technology-mediated learning: A review of the past for the future. Journal of Information Systems Education, 18, 183–192. Wan, Z., Wang, Y., & Haggerty, N. (2008). Why people benefit from e-learning differently: The effects of psychological processes on e-learning outcomes. Information & Management, 45(8), 513–521. doi:10.1016/j. im.2008.08.003 Wang, M., Pool, M., Harris, B., & Wangemann, P. (2001). Promoting online collaborative learning experiences for teenagers. Educational Media International, 38(4), 203–215. doi:10.1080/09523980110105079 Wang, J., Fong, Y. C., & Alwis, W. A. M. (2005). Developing professionalism in engineering students using problem based learning. Proceedings of the 2005 Regional Conference on Engineering Education (pp. 1- 9). Johor, Malaysia. Wang, Q. (2006). Quality assurance - best practices for assessing online programs. International Journal on ELearning, 5(2), 265. Wang, Y. (2003). Assessment of learner satisfaction with asynchronous electronic learning systems. Information & Management, 41, 75–86. doi:10.1016/S03787206(03)00028-4 Wang, Y., Wang, H., & Shee, D. Y. (2007). Measuring e-learning systems success in an organizational context: Scale development and validation. Computers in Human Behavior, 23, 1792–1808. doi:10.1016/j.chb.2005.10.006 Waschull, S. B. (2005). Predicting success in online psychology courses: Self-discipline and motivation. Teaching of Psychology, 32(3), 3. doi:10.1207/ s15328023top3203_11
Watkins, D. (1998). Assessing approaches to learning: A cross-cultural perspective. In Dart, B., & Boulton-Lewis, G. (Eds.), Teaching and learning in higher education. Melbourne, Australia: The Australian Council for Educational Research. WCET. (2001). Best practices for electronically offered degree and certificate programs. Retrieved October 26, 2009, from http://wcet.info/ resources/ accreditation/ Accrediting%20-%20Best%20Practices.pdf Webb, H. W., Gill, G., & Poe, G. (2005). Teaching with the case method online: Pure versus hybrid approaches. Decision Sciences Journal of Innovative Education, 3, 223–250. doi:10.1111/j.1540-4609.2005.00068.x Web-based Education Commission. (2000). The power of the Internet for learning: Moving from promise to practice. Washington, D.C. Retrieved from http://www. hpcnet.org/ webcommission Weber, J. M., & Lennon, R. (2007). Multi-course comparison of traditional versus Web-based course delivery systems. Journal of Educators Online, 4(2), 1–19. Welsh, E. T., Wanberg, C. R., Brown, K. G., & Simmering, M. J. (2003). E-learning: Emerging uses, empirical results and future directions. International Journal of Training, 245-258. Westerman, J. W., & Yamamura, J. H. (2007). Generational preferences for work environment fit: Effects on employee outcomes. Career Development International, 12(2), 150–161. doi:10.1108/13620430710733631 Wetzel, C. D., Radtke, P. H., & Stern, H. W. (1994). Instructional effectiveness of video media. Hillsdale, NJ: Lawrence Erlbaum Associates. Whetten, D. A. (2008). Introducing AMLEs educational research databases. Academy of Management Learning & Education, 7, 139–143. Whetten, D. A., Johnson, T. D., & Sorenson, D. L. (2009). Learning-centered course design. In Armstrong, S. J., & Fukami, C. V. (Eds.), The SAGE handbook of management learning, education, and development (pp. 254–270). London, UK: Sage. Whitelock, D., & Jeffs, A. (2003). [Editorial]. Journal of Educational Media, 28(2-3), 99–100.
Wilkinson, L. (1999). Statistical methods in psychology journals: Guidelines and explanations. The American Psychologist, 54(9), 594–604. doi:10.1037/0003066X.54.8.594 Williams, E. A., Duray, R., & Reddy, V. (2006). Teamwork orientation, group cohesiveness, and student learning: A study of the use of teams in online distance education. Journal of Management Education, 30, 592–616. doi:10.1177/1052562905276740 Williams, P. W. (2009). Assessing mobile learning effectiveness and acceptance. ProQuest Information & Learning, 69. Retrieved from http://search.ebscohost.com /login.aspx?direct =true&db=psyh &AN=2009-99090398&loginpage= CustLogin.asp?custid =s5776608&site= ehost-live Wilson, T. (2008). New ways of mediating learning: Investigating the implications of adopting open educational resources for tertiary education at an institution in the United Kingdom as compared to one in South Africa. International Review of Research in Open and Distance Learning, 9(1). Wimba. (2009a). Wimba for higher education. Retrieved from http://www.wimba.com/ solutions/ higher-education/ wimba_classroom_for_higher_education Wimba. (2009b). Bring class to life. Retrieved from http:// www.wimba.com/products/wimba_classroom Witten, I., & Frank, E. (2005). Data mining: Practical machine learning tools and techniques. Elsevier. Wold, H. (1980). Model construction and evaluation when theoretical knowledge is scarce: Theory and application of partial least squares. In Kmenta, J., & Ramsey, J. B. (Eds.), Evaluation of econometric models (pp. 47–74). New York, NY: Academic Press. Wold, H. (1982). Soft modeling: The basic design and some extensions. In Jöreskog, K. G., & Wold, H. (Eds.), Systems under indirect observation (pp. 1–54). Amsterdam, The Netherlands: North-Holland. Wold, H. (1985). Systems analysis by partial least squares. In Nijkamp, P., Leitner, L., & Wrigley, N. (Eds.), Measuring the unmeasurable (pp. 221–251). Dordrecht, The Netherlands: Marinus Nijhoff.
Wold, H. (1989). Introduction to the second generation of multivariate analysis. In Wold, H. (Ed.), Theoretical empiricism (pp. 7–11). New York, NY: Paragon House. Wolfe, R. N., & Johnson, S. D. (1995). Personality as a predictor of college performance. Educational and Psychological Measurement, 55, 177–185. doi:10.1177/0013164495055002002 Wu, J.-H., Tennyson, R. D., & Hsia, T.-L. (2010). A study of student satisfaction in a blended e-learning system environment. Computers & Education..doi:10.1016/j. compedu.2009.12.012 Xiaojing, L., Magjuka, R. J., Bonk, C. J., & Seung-hee, L. (2007). Does sense of community matter? Quarterly Review of Distance Education, 8(1), 9–24. Yang, Y., & Cornelius, L. F. (2004). Students’ perceptions towards the quality of online education: A qualitative approach. Paper presented at the Association for Educational Communications and Technology 27th Conference. Yang, Y., & Cornelious, L. F. (2005). Preparing instructors for quality online instruction. Online Journal of Distance Learning Administration, 8(1). Yarusso, L. (1992). Constructivism vs. objectivism. Performance and Instruction Journal, April, 7–9. Yee, H. T. K., Luan, W. S., Ahmad, F. M. A., & Mahmud, R. A. (2009). Review of the literature: Determinants of online learning among students. European Journal of Soil Science, 8(2), 246–252. Yin, R. (1994). Case study research: Design and methods (2nd ed.). Beverly Hills, CA: Sage Publishing. Young, A., & Norgard, C. (2006). Assessing the quality of online courses from the students’ perspective. The Internet and Higher Education, 9, 107–115. doi:10.1016/j. iheduc.2006.03.001 Young, J. R. (2002). Hybrid teaching seeks to end the divide between traditional and online instruction. The Chronicle of Higher Education, 48(28), A33–A34.
Yukselturk, E., & Top, E. (2005-2006). Reconsidering online course discussions: A case study. Journal of Educational Technology Systems, 34(3), 341–367. doi:10.2190/6GQ8-P7TX-VGMR-4NR4 Yushau, B. (2006). Computer attitude, use, experience, software familiarity and perceived pedagogical usefulness of mathematics professors. Eurasia Journal of Mathematics, Science & Technology Education, 2(3). Zapalska, A., Shao, D., & Shao, L. (2003). Student learning via WebCT course instruction in undergraduate-based business education. Teaching Online in Higher Education (Online) Conference. Zhang, D., Zhao, J. L., Zhou, L., & Nunamaker, J. F. (2004). Can e-learning replace classroom learning? Communications of the ACM, 47(5), 74–79. doi:10.1145/986213.986216 Zimmerman, B. J. (1986). Development of self-regulated learning: Which are the key subprocesses? Contemporary Educational Psychology, 16, 307–313. doi:10.1016/0361-476X(86)90027-5 Zimmerman, B. J. (1994). Dimensions of academic self-regulation: A conceptual framework for education. In Schunk, D. H., & Zimmerman, B. J. (Eds.), Self-regulation of learning and performance: Issues and educational applications (pp. 3–21). Hillsdale, NJ: Erlbaum. Zimmerman, B. J., & Martinez-Pons, M. (1986). Development of a structured interview for assessing student use of self-regulated learning strategies. American Educational Research Journal, 23, 614–628. Zwanenberg, N. V., Wilkinson, L. J., & Anderson, A. (2000). Felder and Silverman's index of learning styles and Honey and Mumford's learning styles questionnaire: How do they compare and do they predict academic performance? Educational Psychology, 20(3), 365–381. doi:10.1080/713663743
About the Contributors
Sean B. Eom is a Professor of Management Information Systems (MIS) at the Harrison College of Business of Southeast Missouri State University, where he was appointed a Copper Dome Faculty Fellow in Research during the academic years 1994-1996 and 1998-2000. He received his Ph.D. in Management Science from the University of Nebraska - Lincoln in 1985. His other degrees are from the University of South Carolina at Columbia (M.S. in international business), Seoul National University (MBA in international management), and Korea University (B.A.). His research areas include decision support systems (DSS), expert systems, inter-organizational information systems management, and e-learning systems. He is the author/editor of nine books, including The Development of Decision Support Systems Research: A Bibliometrical Approach; Author Cocitation Analysis: Quantitative Methods for Mapping the Intellectual Structure of an Academic Discipline; and Inter-Organizational Information Systems in the Internet Age. He has published more than 55 refereed journal articles and 80 articles in encyclopedias, book chapters, and conference proceedings. The Decision Sciences Institute honored him with the 2006 Best Paper award in the Decision Sciences Journal of Innovative Education (DSJIE). Google Scholar citation statistics indicate that his coauthored 2006 DSJIE article is the most frequently cited article in DSJIE as of September 2010, and it is also one of the journal's most frequently downloaded articles.

J. B. (Ben) Arbaugh is a Professor of Strategy and Project Management at the University of Wisconsin Oshkosh. He received his Ph.D. in Business Strategy from the Ohio State University. Ben is currently Editor of Academy of Management Learning & Education. He was the 2009 GMAC Management Education Research Institute Faculty Research Fellow and is a past chair of the Management Education and Development Division of the Academy of Management. Ben's online teaching research has won best article awards from the Journal of Management Education and the Decision Sciences Journal of Innovative Education. His other research interests are in graduate management education and the intersection between spirituality and strategic management research. Ben also sits on the editorial boards of numerous journals, including the International Journal of Management Reviews, the Decision Sciences Journal of Innovative Education, Management Learning, The Internet and Higher Education, and the Journal of Management Education.

***

Zehra Akyol is a researcher with a PhD in Instructional Technology. She conducted her doctoral research at the University of Calgary as a visiting student during her doctoral studies at Middle East Technical University. Her research examined community development in online and
blended learning environments and the factors affecting the development of communities of inquiry in these environments. She is currently working on cognitive and emotional aspects of online learning.

Bünyamin Atici is the Head of the Computer Education and Instructional Technology department at Firat University. Dr. Atici has over ten years of management and teaching experience in Turkey. He has assumed responsibility for teaching and research at the university level, international project management, and consulting, and has taught several university courses related to pedagogy and computing. At present, he teaches at Firat University, Turkey. In 2005, he managed a grant on cyber terrorism for the European Union Leonardo da Vinci Program. He has delivered local-level seminars on EU Education and Training Programmes and Actions (Project Management) in Germany, France, Greece, FYR of Macedonia, and Turkey, and he developed the Firat University Virtual Learning Environment. As a researcher and consultant in the field of instructional technology, he specializes in designing and implementing all kinds of online learning and has been working on methods for e-learning systems, including social knowledge construction.

Nicholas J. Ashill holds the position of Professor of Marketing at the American University of Sharjah in the United Arab Emirates. He previously held the position of Professor of Marketing at Victoria University of Wellington in New Zealand. Professor Ashill has published extensively in international marketing journals, including the Journal of Retailing, Journal of Management, European Journal of Marketing, and the Journal of Strategic Marketing. His research interests include e-learning, marketing information systems, service quality, uncertainty, and customer satisfaction. He also serves on a number of editorial boards, including the European Journal of Marketing, Marketing Intelligence and Planning, and the Journal of Asia Pacific Business.

Art Bangert is an Associate Professor in the Department of Education at Montana State University, where he teaches courses in educational statistics, research methods, and assessment. His research interests are focused on designing, teaching, and evaluating online learning environments.

Meltem Huri Baturay is an instructor at the Institute of Informatics, Gazi University, Turkey. Currently, she works as the Academic Affairs Coordinator of Distance Education at Gazi University. She holds a Ph.D. (2007) in Computer Education and Instructional Technology from Middle East Technical University, Turkey. She has a particular interest in student perceptions and achievement in e-learning environments and Web-based language learning.

Constanţa-Nicoleta Bodea is a professor of artificial intelligence and project management at the Academy of Economic Studies (AES), Bucharest, Romania. She directs a Master's program in Project Management and heads the Economic Research Department at AES. She has managed more than 20 R&D and IT projects over the last ten years, has contributed to or authored 11 books and more than 50 papers on project management, information systems, and artificial intelligence, and was honored by IPMA with the Outstanding Research Contributions award (2007).
Vasile Bodea works in the Economic Research Department at the Academy of Economic Studies. He holds a Ph.D. in economic cybernetics and statistics; his doctoral thesis was on knowledge management. He is co-author of three books and has published more than 20 papers on knowledge management.

Craig Robertson Cadenhead was an honours student in Information Systems at the Department of Information Systems, University of Cape Town. While serving as the Corporate Information Architect for Old Mutual South Africa, he conducted his research on "Factors Influencing User Acceptance of Internet Based E-Learning in Corporate South Africa." His professional IT experience is in computer programming and enterprise architecture.

Maria-Iuliana Dascălu has a Master's degree in Project Management from the Academy of Economic Studies (AES), Bucharest, Romania (2008), and a Bachelor's degree in Computer Science from the Alexandru-Ioan Cuza University, Iasi, Romania (2006). She is a PhD candidate in Economic Informatics at the Academy of Economic Studies, combining her work experience as a programmer with numerous research activities. Her research relates to computer-assisted testing with applications in e-learning environments for project management.

David Xin Ding is an Assistant Professor in the Department of Information and Logistics Technology at the University of Houston. He received his Ph.D. in Operations Management from the David Eccles School of Business at the University of Utah. Dr. Ding is interested in the design and effectiveness of complex service systems, with specific emphasis on system and service quality, experience design, value co-creation, and operational efficiency. His research has been published or is in press with the Journal of Business Research, Journal of Service Research, and Journal of Service Management. He has been awarded several grants to explore the use of video technology in classroom settings. He is a member of DSI, INFORMS, and POMS.

Ana Sanz Esteban received an Engineering degree in Computer Science from Carlos III University of Madrid and an MSc in Computer Science and Technology from the same university. She is a PhD student and associate professor in the Computer Science Department at Carlos III University of Madrid. Her software engineering research focuses on software process improvement, especially test process improvement, knowledge transfer, electronic process guides and learning contents, and how to improve e-learning courses. She teaches in an online Master's program offered by Carlos III University.

D. Randy Garrison is the Director of the Teaching & Learning Centre and a Professor in the Faculty of Education at the University of Calgary. He was formerly a Professor and Dean of the Faculty of Extension at the University of Alberta. Dr. Garrison has published extensively on teaching and learning in higher, adult, and distance education contexts. Randy also manages and conducts research on innovative teaching and learning approaches and designs. His most recent books are "E-Learning in the 21st Century" (2003) and "Blended Learning in Higher Education" (2008).
Clyde W. Holsapple, a Fellow of the Decision Sciences Institute, holds the Rosenthal Endowed Chair in the University of Kentucky's Gatton College of Business. He has authored about 150 research articles in journals including Operations Research, Journal of Operations Management, Organization Science, Decision Sciences, Decision Support Systems, Group Decision and Negotiation, Communications of the ACM, Journal of the American Society for Information Science and Technology, Journal of Knowledge Management, Knowledge Management Research and Practice, Knowledge and Process Management, and IEEE Transactions on Systems, Man and Cybernetics. His books include Handbook on Knowledge Management, Foundations of Decision Support Systems, and Handbook on Decision Support Systems. He has served as Editor-in-Chief of the Journal of Organizational Computing and Electronic Commerce; Senior Editor of Information Systems Research; Area Editor of Decision Support Systems and the INFORMS Journal on Computing; and Associate Editor of Management Science and Decision Sciences. He has chaired over 25 doctoral dissertations.

Alvin Hwang is Professor of Management and Chair of International Business Programs at Pace University. His research covers management development, technology and learning, cross-cultural differences, leadership, and organizational learning, with publications in the Academy of Management Learning and Education, Journal of Cross-Cultural Psychology, Human Relations, and others. Recent recognition has included the 2006 Best Paper award in the Decision Sciences Journal of Innovative Education, 2005 and 2006 Best Paper Proceedings at the Academy of Management Conference, and numerous Outstanding Reviewer Awards from the Management Education Division of the Academy of Management from 2004 to 2010.

María del Carmen Jiménez-Munguia is a Professor of Accounting at the Universidad de las Américas Puebla in México. She is a graduate student in the Science, Engineering and Technology Education program at the Universidad de las Américas Puebla. Her doctoral research focuses on how the use of technology might influence the in-class performance of students and their learning outcomes.

Eyong B. Kim is an Associate Professor of Management Information Systems at the Barney School of Business at the University of Hartford. His research interests include information security, e-commerce, and online education. He has published papers in Communications of the ACM, Decision Sciences, Decision Sciences Journal of Innovative Education, Journal of the Operational Research Society, Omega, and others.

Jamison V. Kovach is an Assistant Professor in the Department of Information and Logistics Technology at the University of Houston. She received her Ph.D. in Industrial Engineering from Clemson University. Her industrial experience includes several years as a product and process improvement engineer in the U.S. textile industry, and she is certified in Six Sigma Black Belt training. Her research interests include robust design, experimental design, and the application of quality improvement and management methods for organizational problem solving. She has been awarded several grants to explore innovative methods of instruction, most recently with podcasts and video productions. She has recently published articles in Quality Engineering, Quality Progress, and Quality Advances in Higher Education, and is a member of ASQ, DSI, IIE, and POMS.
Anita Lee-Post is an Associate Professor in the Decision Science and Information Systems area at the University of Kentucky. She received her Ph.D. in Business Administration from the University of Iowa in 1990. Her research interests include e-learning, Web mining, knowledge management, decision support systems, artificial intelligence, computer integrated manufacturing, and group technology. She has published extensively in journals such as OMEGA, Decision Sciences Journal of Innovative Education, Computers and Industrial Engineering, International Journal of Production Research, AI Magazine, Expert Systems, Expert Systems with Applications, IEEE Expert, Journal of the Operational Research Society, and OM Review. She is the author of Knowledge-based FMS Scheduling: An Artificial Intelligence Perspective. She serves on the editorial review boards of Production Planning and Control, International Journal of Business Information Systems, International Journal of Data Mining, Modeling and Management, and Journal of Managerial Issues.

Luis Felipe Luna-Reyes is a Professor of Business at the Universidad de las Américas Puebla in México. He holds a Ph.D. in Information Science from the University at Albany. Luna-Reyes is also a member of the Mexican National Research System. His research focuses on electronic government and on modeling collaboration processes in the development of Information Technologies across functional and organizational boundaries. He is also interested in the use of technology as a tool for teaching and learning.

Florence Martin is an Assistant Professor in the Instructional Technology program at the University of North Carolina, Wilmington. She is interested in researching technology tools that improve learning and performance (learning management systems, virtual classrooms, Web 2.0 tools, etc.). She received her Doctorate and Master's in Educational Technology from Arizona State University and a Bachelor's degree in Electronics and Communication Engineering from Bharathiyar University, India. Prior to her current position, she worked on instructional design projects for Maricopa Community College, University of Phoenix, Intel, Cisco Learning Institute, and Arizona State University.

Stacey McCroskey joined academia after a successful corporate career. She has worked in the oil & gas, financial services, and software industries in Houston, TX, Ann Arbor, MI, and London, England. She has managed software development, project management, quality assurance, technical writing, data analysis, operations, and technical support teams, both domestic and international. She received her PhD in Organization and Management from Capella University and holds the PMP (Project Management Professional) certification. She currently works as an adjunct instructor, online and face-to-face, for several universities, teaching organizational behavior and project management.

Susan L. Miertschin is an Associate Professor in Computer Information Systems at the University of Houston. She began her career in higher education teaching applied mathematics for engineering technology students. Her longstanding research interest in the application of information and communication technologies to instruction, together with a deep knowledge of computer systems, led her to change her teaching focus to computer information systems in 2000. Recently, she completed graduate coursework in Medical Informatics in order to deepen and broaden her knowledge of a key application domain for computer information systems. She is active in ASEE and the Information Technology Education SIG of ACM. She has taught both online and hybrid courses and is interested in enhancing the quality of online learning experiences.
Radu-Ioan Mogoş is a PhD student at the Faculty of Cybernetics, Statistics and Economic IT, Bucharest Academy of Economic Studies. He works as an IT analyst in the IT Department. He is a member of the Romanian Project Management Association. His research domains include artificial intelligence and project management. He is the author of one book and a team member on several projects.

Abdou Ndoye is the Assessment Director for the Watson School of Education. He also teaches instructional design and learning outcomes assessment courses. His research focuses on the impact of teacher education, student learning and program outcomes assessment, and distance education. He earned his doctorate from the Neag School of Education at the University of Connecticut. Recent scholarship includes investigating student achievement in charter schools, analyzing key variables in teacher working conditions, and the implementation of e-portfolios in candidate assessment.

Sharon Lund O'Neil is a Professor of Organizational Leadership in the Department of Information and Logistics Technology at the University of Houston. She received her Ph.D. from the University of Illinois and has teaching and administrative experience in five states. Her main research interests include leadership and soft skill development. She has made hundreds of professional presentations in the US (47 states) and abroad. She has been the recipient of more than 50 competitive grant awards (totaling nearly $3 million), with resulting products distributed around the world. Her over 100 professional publications include several books, the most recent being "Your Attitude Is Showing! A Primer of Human Relations" (2008), which has also been translated into Chinese, Polish, and Croatian.

Michele Parker is an Assistant Professor of Educational Leadership at UNC Wilmington. Her doctorate is in Research, Statistics, and Evaluation. Her research interests include the use of technology in higher education and K-12 settings.

Birgit Leisen Pollack is the Chair of the Marketing Department at the University of Wisconsin Oshkosh. She primarily teaches Consumer Behavior, Marketing Strategy, and International Marketing. She has taught a number of online classes at the MBA level. Her primary research interests include customer satisfaction, customer retention, and service quality. She is also interested in conducting pedagogy research in online education. She has published in the Journal of Services Marketing, Journal of Business Research, Journal of Relationship Marketing, Managing Service Quality, Journal of Marketing Education, Marketing Education Review, Annals of Travel Research, and Journal of Travel Research, among others. Several of her journal and conference publications have won awards for research quality.

Javier Saldaña Ramos received an Engineering degree in Computer Science from Carlos III University of Madrid and an MSc in Computer Science and Technology from the same university. He is a PhD student and associate professor in the Computer Science Department at Carlos III University of Madrid. His software engineering research focuses on software process improvement through teamwork process improvement, including team management improvement, virtual team management, and the use of TSP (Team Software Process) in teamwork. He has several years of experience as a software engineer.
Ion Gh. Roşca is a professor at the Academy of Economic Studies, Bucharest. Since 2004 he has been rector of the university. He has taught computer programming and ICT and is the author of more than 30 textbooks. His research domains are the knowledge society, e-business, project management, and GRID systems. He has published 11 books and more than 50 papers.

Antonio de Amescua Seco is a full professor in the Computer Science Department at Carlos III University of Madrid. He has worked as a software engineer in a public company (Iberia Airlines) and as a software engineering consultant in a private company (Novotec Consultores). He founded Progresion, a spin-off company, to offer advanced software process improvement services. He received his PhD in Computer Science from the Universidad Politécnica of Madrid. His research interests include software process improvement, software reuse, and software project management.

James Stapleton is an Assistant Professor of Management in the Donald L. Harrison College of Business at Southeast Missouri State University. He holds a Ph.D. in Workforce Education and Development from Southern Illinois University, a Master of Business Administration and Master of Business Education from Southern New Hampshire University, and a Bachelor of Science in Organizational Leadership & Management from Friends University. Dr. Stapleton also serves as the Executive Director of the Center for Innovation and Entrepreneurship at Southeast, one of the most comprehensive entrepreneurship-focused university centers in the Midwest. His primary research interests include group dynamics, entrepreneurship teaching and capacity building, and online education. He has authored articles in various refereed publications, including the Delta Pi Epsilon Journal, the National Association of Business Teacher Educator's Journal, and the Human Systems Management Journal. He has also twice received the Best Research Paper award at the National Business Education Research Conference.

Yalın Kılıç Türel was born in Elazig, Turkey, and received his Bachelor's, Master's, and PhD degrees in Computer Science Education, Computer Software, and Educational Sciences, respectively. Until 2004 he taught a variety of courses in several vocational high schools, such as introduction to computer sciences, operating systems and office applications, desktop publishing, Web page design, and programming languages. He has since been working as a teaching and research assistant in the Department of Computer Education and Instructional Technology, where he has taught courses including instructional use of technology, teaching methods, educational technology and material development, and instructional design. He received a Fulbright Scholarship in 2007 and worked as a visiting scholar for his PhD dissertation research at Florida State University, Tallahassee, Florida. His research focuses mainly on learning objects, interactive whiteboards, instructional design, Web 2.0 technologies, and technology integration into school settings.

Jean-Paul Van Belle is an associate professor and Head of the Department of Information Systems at the University of Cape Town. In the last eight years, he has authored or co-authored about 15 books/chapters, 15 journal articles, and more than 60 peer-reviewed published conference papers. His key research area is the social and organisational adoption of emerging Information Technologies in a developing-world context, including m-commerce, e-government, and open source software.
Erman Yukselturk is a researcher at the Continuing Education Center, Middle East Technical University, Turkey. He also works as an online coordinator assistant for the Information Technologies Certificate Program. His research interests are the design, development, and implementation of online learning environments; instructional technology; and the integration of technology into various learning environments. Yukselturk holds an MSc (2003) and a Ph.D. (2007) in Computer Education and Instructional Technology, both from Middle East Technical University.
Index
A academic achievement 300-303, 305, 312 academic disciplines 1, 3, 6, 12, 15, 22, 56 academic performance 44, 54, 150-151, 171, 181, 298-299, 302-304, 306-312, 337, 355 academic programs 24, 232, 234, 248, 298 Academic Quality Improvement Program (AQIP) 233, 235, 243 Academy of Economic Studies (AES) 149, 152-154, 168, 181-182 Academy of Management Learning & Education (AMLE) 14-17, 19-22, 39, 46, 50, 52-53, 100, 209 Action Research 58, 86, 195-196, 201-207, 209, 211-212 Americans with Disabilities Act (ADA) 236, 243 analysis of covariance (ANCOVA) 58, 77, 80, 101, 134, 136 Analysis of Moment Structures (AMOS) 58, 61, 74, 78-79, 97, 101, 111, 114, 117, 128, 258 analysis of variance (ANOVA) 46-47, 134, 136-139, 143-144, 259, 280-281, 383-385 application sharing 250, 256, 266 asynchronous communication 25, 360, 362, 367, 381 asynchronous instruction 372 auditory representation 321
B Bayesian network 151 behavioral intention 9, 258, 343, 345-347, 349-352, 356 behaviorism 319 Big-Five personality model 294-295, 298-300, 304, 307, 315 Biglan, Anthony 3-4, 17 Blackboard 9, 20, 52, 57, 89, 102, 200, 202-203, 212-213, 250, 304, 343, 375, 381
blended courses 3, 5, 375, 378 blended learning 8, 10, 13-14, 16-19, 34, 38-39, 43, 48-52, 77, 101, 182, 209, 259-260, 287, 328, 354, 375-384, 387-390 blended learning environments 8, 10, 13, 34, 375, 380, 383, 387-390 bootstrap procedure 123
C Canfields Learning Styles Inventory (CLSI) 322 canonical correlation analysis 58 Capability Maturity Model (CMM) 238, 245-246 Causal comparative 133-134, 259 Chi Square Goodness of Fit test 137-138, 140 cluster analysis 111, 150, 165, 167-169, 171, 180-181 Clustering 149, 164-167, 169-171, 180, 185 cognitive ability 209, 298-299, 301, 310, 313 cognitive presence 7, 10, 25-35, 141-142, 144-147, 252, 382 collaboration 12, 14, 17, 30-31, 50, 131, 134, 138, 162, 182, 184, 208, 234, 237, 269, 273, 286, 296, 358-359, 361, 370-371, 375, 377, 380, 382-385, 387, 389-390 collaborative learning 8, 15, 27, 34, 148, 163, 184, 241, 255, 264, 289, 355, 369, 371-373, 378 Community of Inquiry (CoI) 7, 10, 12, 16, 18, 23-32, 34-35, 130, 140-142, 144-147, 388 computer-based learning 294 computer-supported collaborative learning 372 conjoint analysis 58 constructivism 5, 19, 196, 210, 319-320, 337, 358 Constructivist Learning 208, 212, 358 content management systems (CMS) 153, 276, 327, 375, 380-381, 388 continuous quality improvement (CQI) 232-233, 243, 248 continuous training 317
contract activity packages (CAP) 323 control variables 6, 37, 40, 44, 47-49, 56 corporate learners 270, 272, 283-284, 286-287 corporate strategies 268 course flexibility (CFL) 273, 279, 281, 283-287, 292 course formats 40, 43-44, 162, 295, 306 course quality (CQL) 233, 240, 271, 273, 279, 285, 292 covariance-based SEM 110, 128 Cramer’s V 140 curriculum integration 1
D database management systems (DBMS) 360-361 data mining 149-151, 155-158, 164, 180-185 Decision Sciences Journal of Innovative Education (DSJIE) 14, 17-21, 32, 50-56, 101-102, 209-211, 247, 310, 313, 353, 355 delivery monitoring processes 232 DeLone and McLean (the DM model) 82-84, 99 DeLone-McLean Model of Information System Success 71, 82, 87-88, 102 development planning 268 digital formats 318 Dimensions and Antecedents of VLE Effectiveness 58 discriminant analysis 58, 111, 164 distance education 13, 15, 17-22, 32-34, 50-53, 55-56, 147, 186, 196, 210-212, 233, 236, 241, 243-247, 251, 254, 257, 260-265, 289, 296, 308, 310, 313-314, 317, 336, 340-342, 346, 351-352, 354, 360, 370-371, 389-390 distance education platforms 340-341, 351-352 Distance Learning Technology Center 205
E eboard 256 Economic Cybernetics, Statistics and Informatics (ECSI) 168, 180-181 Educational Data Mining 150, 182-185 Educause learning initiative (ELI) 325 effect sizes (ES) 5, 139-140, 145-146, 148, 352 effort expectancy 340-341, 344-347, 349-352, 356 e-learning 18, 22-23, 30, 32-34, 51-52, 57-60, 64-65, 71, 73, 77, 82-87, 89, 96, 98-100, 102-103, 149-150, 152, 154-156, 158-159, 161, 163-164, 167, 169, 172, 180-182, 186, 195-203, 205-212, 231-232, 234-236, 238, 240, 242, 244-247, 258, 260-261, 263-264, 267-275, 277, 279, 282-284, 286-290, 292, 294-297, 311, 313, 315-319, 321, 325, 329-330, 332-338, 340-343, 350-351, 353-356, 372, 375-378, 380-381, 389-390 e-learning environment 18, 30, 51, 82-83, 154-155, 181, 199-200, 209, 242, 294 e-learning initiatives 85, 195, 197, 199, 207, 238, 268 E-Learning Maturity Model (eMM) 238, 245 e-learning platforms 149-150, 152, 154-156, 159, 163-164, 167, 172, 180, 318, 333, 335, 338, 351 e-learning research 58, 83, 197, 208, 269 e-learning strategy 268 e-learning success model 85, 87, 195-196, 199-200, 202, 205, 207, 210 E-Learning Success Model of Holsapple and Lee-Post 85 e-learning systems 57-60, 73, 83, 85-87, 89, 96, 98, 100, 103, 150, 199-202, 207, 211, 268, 272, 290, 316, 318-319, 321, 330, 332-333, 351, 380, 390 e-learning systems' success (ELSS) 86 e-learning theories 82-83 electronic survey 267, 274, 277, 282, 285-286 electronic whiteboard 250 emotional intelligence (EI) 62, 92, 301-302, 305, 308, 312 emotional stability 299-301, 303-305, 315 enabling learning objects (ELO) 326 extraversion 299-305, 315 extrovert, intuitive, thinking, judgmental (ENTJ) 299 extroverts 299
F face-to-face class 252, 255 face-to-face classes 48, 234 face-to-face environment 305, 378 face-to-face instruction 314, 376-378, 390 face-to-face teaching 318 facilitating conditions 340-341, 343-345, 351 factor analysis 6, 46, 57-58, 77, 103, 110-111, 114, 117, 121, 258, 278-280, 315 formal training 317 formative indicators 117-121, 123, 125, 128
G grade point average (GPA) 40, 43-44, 155, 168, 170, 173, 175, 177, 203, 213, 215, 219, 226, 302, 305-306 group work 25, 196, 256, 260, 323, 363, 373 Gutenberg, Johannes 316
H hierarchical linear modeling (HLM) 6, 46-47, 56 higher education 1, 3, 5, 13, 16-20, 22, 31-34, 50, 54-55, 77, 79, 101, 130-131, 142, 147-148, 150, 181-185, 196, 209, 211, 231-237, 242-248, 255, 260-263, 287-289, 303, 308, 311, 313, 315, 336, 341-342, 351, 354-355, 358, 365, 368, 370, 372, 375, 388-390 Hybrid/Blended Online Courses 196, 212 hypermedia 294, 329, 336, 388 hypertext pre-processor (PHP) 274, 276, 290, 362, 365
I Index of Learning Styles (ILS) 324, 337 individualized learning 295 individual work 305, 357, 360, 365-368 information and communication technologies (ICT) 184, 186, 233, 268, 288, 316-317, 344, 352, 376, 380 information quality 61, 64-65, 73, 75-77, 82-84, 87, 90, 96, 99, 103, 200, 205-207, 212 information quality dimension 200 information systems (IS) 2-6, 8-10, 12-27, 29-31, 38-40, 43-50, 53-54, 56-65, 67-69, 71, 73-87, 89-94, 96-108, 110-142, 144-146, 148-159, 161-173, 175-186, 188, 190-193, 195-197, 199201, 203, 207-220, 226-228, 231-246, 248-259, 262-264, 266, 268-275, 278-289, 291-309, 311-312, 316-338, 340-354, 356-363, 365-370, 372-373, 375-384, 386-388, 390-391, 438 innovation diffusion theory (IDT) 343 Institute for Education Sciences (IES) 131, 148 Institute for Higher Education Policy (IHEP) 211, 236, 243-245, 262 instructor attitude towards e-learning (IAT) 279, 286, 291 instructor evaluation 24 instructor responses 24 instructor response timeliness towards the learners (IRS) 273, 279, 291 instructor-student communication 24
440
instructor-student interaction 24 intellectual curiosity 301 interaction 5, 7-8, 11-13, 16-17, 19, 24, 32-34, 39, 43-44, 50, 54, 56, 83, 123, 132, 143, 147-148, 159, 161, 163, 182, 186, 197, 236-237, 240242, 244, 250-252, 254-255, 261-262, 264-267, 272-275, 280, 282, 284, 286-287, 293, 295296, 298-300, 320, 336, 344, 349, 355, 359, 364-366, 368, 375-384, 386-387, 389-390 interactive blended learning environment (IBLE) 375, 383-384, 387 Internal Validity 48, 130-131, 133-136, 142, 144, 147-148, 259 International Personality Inventory Pool (IPIP) 303, 312 Internet based learning 267-272, 276-278, 280, 282287, 291 intra-class correlation coefficients (ICC) 47 introvert, intuitive, feeling, judgmental (INFJ) 299 introverts 299, 302, 324
K K-12 education 235, 342 kinesthetic representation 321 knowledge navigator 296
L learner anxiety towards computers (LAX) 273, 279, 291 learner attitude towards computers (LAT) 279, 284, 291 learner-content interaction 254, 266, 382 learner-instructor interaction 8, 12, 254, 266, 382 learner-learner interaction 8, 12, 43, 241, 254, 266, 382 learner maturity 5 learner satisfaction 24, 34, 197, 211, 241-242, 267271, 273-275, 278, 280-287, 290 learner self-efficacy with the internet (LSE) 273, 279, 291 learning content management systems (LCMS) 375 learning management systems (LMS) 57, 153, 268, 328, 341, 360, 375-376, 379-381, 388, 391 learning material 268-270, 282-283, 317, 319 learning models 49, 197, 296, 316, 319, 324, 328, 332, 335, 337-338, 372, 379, 388 learning object aggregations (LOA) 326 learning objects 316, 325-329, 334, 337-338, 389 learning strategy 296, 357-358, 367
Index
learning style 24, 52, 168, 180, 205, 270, 294-297, 301, 310-311, 313-315, 318, 321-324, 332, 334-338 Learning Style Profile (LSP) 324, 337 learning theories 316, 319, 335-336 Likert scale 63, 86, 275, 348 LISREL 57-58, 61, 63, 67-70, 72-75, 77-78, 80-82, 87-88, 90, 92, 94, 96-97, 99, 101-102, 105, 108, 111-117, 122-123, 125-129 LISREL-PLS 110 local governing bodies 231
M Malcolm Baldrige National Quality Award (MBNQA) 232, 245 management information systems (MIS) 15, 17, 20, 52, 56, 78, 101, 103, 123-124, 126-127, 183, 209-210, 263, 288-289, 304, 306, 308, 311312, 314, 354 Microsoft 215-216, 250, 277, 379 model of PC utilization (MPCU) 343 Moodle LMS 360 motivational model (MM) 343, 351 Multicollinearity 112, 115, 120, 128, 349 Multi-Disciplinary Studies 1-2, 6-9, 12-15, 22 multiple regression analysis 45-46, 56, 58, 60 multiple regression models 49, 60-61, 304 multi-sensory instructional packages (MIP) 159, 161, 165-166, 178, 187-188, 190-194, 323 multivariate analysis of covariance (MANCOVA) 58 Myers-Briggs Personality Type Indicators (MBTI) 297-299, 310, 312, 324-325, 337
N National Research Council (NRC) 131, 148 NEO Five Factor Inventory (NEO-FFI) 303, 309 net benefits dimension 200-201, 206-207 neuroticism 300-301 No Child Left Behind Act (NCLB) 130-131, 148
O objectivist learning 212 occupational training 317 online discussion 24-25, 27-29, 33-34, 43, 269 online education 12, 14-15, 18, 21, 32, 34, 51, 55, 101, 154, 158, 163, 187, 209, 232-235, 238239, 242, 245-247, 250-252, 260, 263, 296297, 306-307, 310, 313, 341, 353, 382, 388
online education environments 307 online learning 5-6, 8-9, 12-13, 15-16, 19-21, 2327, 30-33, 35, 44, 46-56, 58, 64, 130, 133, 138, 141-142, 147, 150, 152, 163, 209-211, 216, 231, 233, 237, 239-242, 244-245, 247, 249, 252-255, 260, 262-266, 269, 288-290, 308, 318, 357-358, 360, 364, 369, 371-373, 376378, 380-381, 387-388, 390-391 online learning community 25, 240-241, 358 online learning environments 16, 24-27, 30-32, 47-51, 130, 141, 209, 211, 237, 241, 252, 262, 357-358, 360, 372-373, 390 online session 253 online teaching 1-2, 5-7, 12, 18, 37-38, 45-46, 49, 184, 237, 243, 265, 318 open learning 51, 318, 389 oversight boards 231
P Partial Least Squares (PLS) 6, 58, 83, 101, 110, 112-118, 121-128, 258, 323 path analysis 57-59, 61-64, 66-72, 75, 77, 79-81, 83, 86, 90, 97, 103, 108, 111-112, 114, 123, 127-128, 258 perceived ease of use (DPE) 12, 17, 56, 87, 117, 126, 211, 258, 267, 272-273, 280, 282-284, 286-287, 289, 292 Perceived Learning 5-8, 14, 18, 20, 23-24, 27-30, 32-35, 43, 51-52, 58, 89, 101, 136, 209, 247, 296, 310, 353, 390 perceived usefulness (DPU) 9, 12, 17, 53, 56, 84, 87, 89, 123, 126, 211, 258-259, 261, 267, 272273, 280, 282-284, 286-287, 289, 292, 343 performance expectancy 340-341, 344-347, 349350, 352, 356 Personality Characteristics Inventory (PCI) 303-306 personality traits 261, 294-295, 298-299, 301, 304308, 311 Personal Style Inventory (PSI) 303, 311 prior computer competency 24 program learning sequences (PLS) 6, 58, 110, 112118, 121-128, 258, 323 project-based learning (PBL) 143, 346, 355, 357360, 363-373, 389
Q quality management system 232, 239 quality of the Internet (TIQ) 273, 280, 282-283, 292 quality of the technology (TCQ) 273, 279, 282, 292
441
Index
quantitative content analysis (QCA) 137-138, 144, 148 quasi-experimental comparative designs 39, 56 quasi-experiments 132-133
system quality dimension 200 systems outcomes 86, 98-99, 103 system use 61, 64-65, 72-73, 76-77, 82-83, 87, 8990, 96, 98-99, 103, 197
R
T
randomized experimental designs 6, 48, 56 reflective indicators 117-119, 128 research instruments 121, 383, 388 research methodology 154, 196, 201, 259, 269, 274, 383 Revised NEO Personality Inventory (NEO-PI-R) 303, 307, 309
teaching presence 7, 17, 23, 25-29, 31, 34-35, 54, 130, 141-143, 145-147, 242, 252, 382 team-work 357 technical competency 296 technological glitches 253 Technology Acceptance Model (TAM) 9, 20, 22, 52, 56, 87, 102-103, 211, 258, 271-272, 283284, 287, 289, 341, 343, 354-355 technology-based e-learning 268-269, 271-272, 277 Technology Readiness Index (TRI) 253, 262 technology usage, attitudes toward 340, 345-347, 349, 351, 356 terminal learning objects (TLO) 326 theory of planned behavior (TPB) 343, 351 theory of reasoned action (TRA) 343, 351, 354 time spent on a course 24 Total Quality Management (TQM) 124, 232, 243, 245 true experiments 132-133, 259
S secondary school students 299 self-paced learning 297, 305, 307 SERUDLAP system 340-341, 345-347, 349-352 SERVQUAL 84-85 Sharable Content Object Reference Model (SCORM) 325-326, 329, 337 Sharable Content Objects (SCO) 325-326, 337 SIMPLIS 67, 78, 90, 101-102 Sloan Consortium (Sloan-C) 32, 34, 50, 238-239, 245-246, 249, 260, 263 social cognitive theory (SCT) 125, 344, 352-353 social influence 103, 340, 344-352, 356 social presence 7, 24-25, 27-30, 32, 34-35, 130, 141-143, 145-148, 245, 250-252, 263, 382-384, 386-387, 389-391 Soft Modeling 112-113, 115, 125, 127-128 software development process 373 Stone-Geisser Q-square test 122 structural equation modeling (SEM) 6-7, 20, 55-56, 58-63, 67, 71, 74-75, 77-83, 86-87, 89, 92, 97, 99, 101-104, 108, 110-112, 114, 117-118, 121, 123-124, 126-128, 258, 261, 306 student-instructor interactions 201, 212, 250 student interactive collaboration 296 student motivation 12, 296-297, 317, 321, 335 student retention 179, 295 student-student communication 24 student-student interactions 24, 236, 250 synchronous communication 35, 253, 362 synchronous instruction 373 synchronous virtual classroom (SVC) 249-250, 252, 254-260 system quality 59-61, 64-65, 73, 75-77, 80, 82-84, 87, 90-91, 94, 96, 98-99, 103, 200, 205-207, 212, 228
442
U unified theory of acceptance and use of technology (UTAUT) 87, 340-345, 347, 351-353, 356 University of Cape Town (UCT) 267, 276 U.S. Department of Defense (DoD) 326 U.S. Department of Education 52, 131, 196, 211, 233, 247 user satisfaction dimension 201, 207
V video-on-demand 271 virtual classroom environments (VLE) 58, 153, 375-376, 381 virtual classrooms 53, 250, 254, 265-266, 269, 271, 294 virtual learning environments 39, 56-57, 78, 183, 210, 258, 289, 382 virtual sessions 254 virtual space 155, 318, 338 visual representation 321 voluntariness of use 89, 340, 344-352, 356
W Web 2.0 84, 132, 134-138, 208, 245, 265, 273, 372 Web-based courses 16-17, 31, 50, 53, 136, 289, 294-297, 305-307
Web-based instruction 10, 33, 244, 264, 294-295, 310, 314, 352 Web-based learning 9, 16, 33, 50, 131, 146, 209, 289, 292-294, 315, 337, 370, 379 webcams 256