Strategies for Knowledge Management Success:
Exploring Organizational Efficacy

Murray E. Jennex, San Diego State University, USA
Stefan Smolnik, EBS University of Business and Law, Germany
Information Science Reference
Hershey • New York
Director of Editorial Content: Kristin Klinger
Director of Book Publications: Julia Mosemann
Acquisitions Editor: Lindsay Johnston
Development Editor: Christine Bufton
Publishing Assistant: Milan Vracarich
Typesetter: Michael Brehm
Production Editor: Jamie Snavely
Cover Design: Lisa Tosheff
Published in the United States of America by
Information Science Reference (an imprint of IGI Global)
701 E. Chocolate Avenue
Hershey PA 17033
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.igi-global.com

Copyright © 2011 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher. Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data
Strategies for knowledge management success : exploring organizational efficacy / Murray Jennex and Stefan Smolnik, editors.
p. cm.
Includes bibliographical references and index.
Summary: "This chapter presents results of a survey looking at how KM practitioners, researchers, KM students, and others interested in KM view what constitutes KM success, including background on KM success and then a series of perspectives on KM/KMS success"--Provided by publisher.
ISBN 978-1-60566-709-6 (hbk.) -- ISBN 978-1-60566-710-2 (ebook)
1. Knowledge management. I. Jennex, Murray E., 1956- II. Smolnik, Stefan, 1970-
HD30.2.S796 2010
658.4'038--dc22
2009052394

British Cataloguing in Publication Data
A Cataloguing in Publication record for this book is available from the British Library.

All work contributed to this book is new, previously unpublished material. The views expressed in this book are those of the authors, but not necessarily of the publisher.
List of Reviewers

Rodrigo Baroni de Carvalho, FUMEC University, Brazil
Vittal S. Anantatmula, Western Carolina University, USA
Kerstin Fink, University of Innsbruck, Austria
Hannu Kivijärvi, Helsinki School of Economics, Finland
P. López Sáez, Universidad Complutense de Madrid, Spain
Shahnawaz Muhammed, Fayetteville State University, USA
Alexander Orth, Accenture, Germany
Vincent M. Ribière, Bangkok University, Thailand
Silke Weiß, Federal Ministry of Finance, Austria
Suzanne Zyngier, La Trobe University, Australia
Thomas Menkhoff, Singapore Management University, Singapore
Table of Contents
Preface ................................................................................................................................................. xiv

Section 1
Knowledge Management Success

Chapter 1
Towards a Consensus Knowledge Management Success Definition ...................................................... 1
    Murray E. Jennex, San Diego State University, USA
    Stefan Smolnik, EBS University of Business and Law, Germany
    David T. Croasdell, University of Nevada, Reno, USA

Chapter 2
A Model of Knowledge Management Success ..................................................................................... 14
    Murray E. Jennex, San Diego State University, USA
    Lorne Olfman, Claremont Graduate University, USA

Chapter 3
Market Knowledge Management, Innovation and Product Performance: Survey in Medium
and Large Brazilian Industrial Firms .................................................................................................... 32
    Cid Gonçalves Filho, FUMEC University, Brazil
    Rodrigo Baroni de Carvalho, FUMEC University, Brazil
    George Leal Jamil, FUMEC University, Brazil

Chapter 4
Does KM Governance = KM Success? Insights from a Global KM Survey ........................................ 51
    Suzanne Zyngier, La Trobe University, Australia

Chapter 5
An Evaluation of Factors that Influence the Success of Knowledge Management Practices
in US Federal Agencies ........................................................................................................................ 74
    Elsa Rhoads, The George Washington University, Institute of Knowledge & Innovation, USA
    Kevin J. O'Sullivan, New York Institute of Technology, USA
    Michael Stankosky, The George Washington University, USA

Section 2
KM Measurements

Chapter 6
Process Model for Knowledge Potential Measurement in SMEs ......................................................... 91
    Kerstin Fink, University of Innsbruck, Austria

Chapter 7
Developing Individual Level Outcome Measures in the Context of Knowledge
Management Success .......................................................................................................................... 106
    Shahnawaz Muhammed, American University of Middle East, Kuwait
    William J. Doll, University of Toledo, USA
    Xiaodong Deng, Oakland University, USA

Chapter 8
Validating Distinct Knowledge Assets: A Capability Perspective ...................................................... 128
    Ron Freeze, Emporia State University, USA
    Uday Kulkarni, Arizona State University, USA

Chapter 9
Assessing Knowledge Management: Refining and Cross-Validating the Knowledge
Management Index (KMI) using Structural Equation Modeling (SEM) Techniques ......................... 150
    Derek Ajesam Asoh, Southern Illinois University Carbondale, USA & National Polytechnic,
    University of Yaounde, Cameroon
    Salvatore Belardo, University at Albany, USA
    Jakov (Yasha) Crnkovic, University at Albany, USA

Chapter 10
A Relational Based-View of Intellectual Capital in High-Tech Firms ................................................ 179
    G. Martín De Castro, Universidad Complutense de Madrid, Spain
    P. López Sáez, Universidad Complutense de Madrid, Spain
    J.E. Navas López, Universidad Complutense de Madrid, Spain
    M. Delgado-Verde, Universidad Complutense de Madrid, Spain

Section 3
KM Strategies in Practice

Chapter 11
The Effect of Organizational Trust on the Success of Codification and Personalization
KM Approaches .................................................................................................................................. 192
    Vincent M. Ribière, Bangkok University, Thailand

Chapter 12
Advancing the Success of Collaboration Centered KM Strategy ....................................................... 213
    Johanna Bragge, Aalto University School of Economics, Finland
    Hannu Kivijärvi, Aalto University School of Economics, Finland

Chapter 13
The Relevance of Integration for Knowledge Management Success: Towards Conceptual
and Empirical Evidence ...................................................................................................................... 238
    Alexander Orth, Accenture, Germany
    Stefan Smolnik, EBS University of Business and Law, Germany
    Murray Jennex, San Diego State University, USA

Chapter 14
Strategies for Successful Implementation of KM in a University Setting .......................................... 262
    Vittal S. Anantatmula, Western Carolina University, USA
    Shivraj Kanungo, George Washington University, USA

Chapter 15
DYONIPOS: Proactive Knowledge Supply ........................................................................................ 277
    Silke Weiß, Federal Ministry of Finance, Austria
    Josef Makolm, Federal Ministry of Finance, Austria
    Doris Ipsmiller, m2n development and consulting gmbh, Austria
    Natalie Egger, Federal Ministry of Finance, Austria

Compilation of References .............................................................................................................. 288
About the Contributors ................................................................................................................... 317
Index ................................................................................................................................................... 325
Detailed Table of Contents
Preface ................................................................................................................................................. xiv Section 1 Knowledge Management Success Chapter 1 Towards a Consensus Knowledge Management Success Definition ...................................................... 1 Murray E. Jennex, San Diego State University, USA Stefan Smolnik, EBS University of Business and Law, Germany David T. Croasdell, University of Nevada, Reno, USA This chapter explores knowledge management, KM, and knowledge management system, KMS, success. The inspiration for this chapter is the KM Success and Measurement minitracks held at the Hawaii International Conference on System Sciences in January of 2007 and 2008. KM and KMS success are issues needing to be explored. The Knowledge Management Foundations workshop held at the Hawaii International Conference on System Sciences (HICSS-39) in January 2006 discussed this issue and reached agreement that it is important for the credibility of the KM discipline that we be able to define KM success. Additionally, from the perspective of KM academics and practitioners, identifying the factors, constructs, and variables that define KM success is crucial to understanding how these initiatives and systems should be designed and implemented. This chapter presents results of a survey looking at how KM practitioners, researchers, KM students, and others interested in KM view what constitutes KM success. The chapter presents some background on KM success and then a series of perspectives on KM/KMS success. These perspectives were derived by looking at responses to questions asking academics and practitioners how they defined KM/KMS success. The chapter concludes by presenting the results of an exploratory survey on KM/KMS success beliefs and attitudes. Chapter 2 A Model of Knowledge Management Success ..................................................................................... 14 Murray E. Jennex, San Diego State University, USA Lorne Olfman, Claremont Graduate University, USA
This chapter describes a knowledge management, KM, Success Model that is derived from observations generated through a longitudinal study of KM in an engineering organization, KM success factors found in the literature, and modified by the application of these observations and success factors in various projects. The DeLone and McLean (1992, 2003) IS Success Model was used as a framework for the model as it was found to fit the observed success criteria and it provided an accepted theoretical basis for the proposed model. Chapter 3 Market Knowledge Management, Innovation and Product Performance: Survey in Medium and Large Brazilian Industrial Firms .................................................................................................... 32 Cid Gonçalves Filho, FUMEC University, Brazil Rodrigo Baroni de Carvalho, FUMEC University, Brazil George Leal Jamil, FUMEC University, Brazil In a business environment characterized by a high level of competitiveness, the impact of new products on an organization's revenue is an important factor. This research was developed with the objective of examining empirically the relationships between market knowledge management, innovation and the performance of new products in the market. This chapter analyzes KM (Knowledge Management) success through a market-oriented perspective because, at the end of the day, KM success must lead to better organizational performance. The research model was generated by the combination of market knowledge models and KM success and maturity models. By means of a survey of 387 medium and large industrial firms, and the use of structural equation modeling, the supremacy of the competitor knowledge management process over other constructs was verified as the most important antecedent of new product performance in the market. The results also revealed that innovation was strongly impacted by technology knowledge management and customer knowledge management. Chapter 4 Does KM Governance = KM Success? Insights from a Global KM Survey........................................ 51 Suzanne Zyngier, La Trobe University, Australia This chapter examines factors that contribute to KM success by differentiating between KM leadership through management and through governance. We look at governance as a structural mechanism that both embeds KM into organizational activity, and lifts it from a series of initiatives to a structured program of activities that are subject to authority, policy, risk management, financial fiduciary duty, and evaluation. Using evidence from 214 respondents to a global internet-based KM survey, we find that having a recognized and defined authority for KM that is well-resourced leads to strategically aligned benefits realized from investment in KM. We demonstrate that governance through assigned authority strongly contributes to strategic KM success. Chapter 5 An Evaluation of Factors that Influence the Success of Knowledge Management Practices in US Federal Agencies ........................................................................................................................ 74 Elsa Rhoads, The George Washington University, Institute of Knowledge & Innovation, USA Kevin J. O'Sullivan, New York Institute of Technology, USA Michael Stankosky, The George Washington University, USA
This research chapter investigates the status of knowledge management practices implemented across federal agencies of the U.S. government. It analyzes the extent to which this status is influenced by the size of the agency, whether or not the agency type is a Cabinet-level Department or Independent Agency, the longevity of KM Practices implemented in the agency, whether or not the agency has adopted a written KM policy or strategy, and whether the primary responsibility for KM Practices in the agency is directed by a CKO or KM unit versus other functional locations in the agency. The research also tests for possible KM practitioner bias, since the survey was directed to members of the Knowledge Management Working Group of the Federal CIO Council who are KM practitioners in federal agencies. Section 2 KM Measurements Chapter 6 Process Model for Knowledge Potential Measurement in SMEs ......................................................... 91 Kerstin Fink, University of Innsbruck, Austria Knowledge measurement is developing into a new research field in the area of knowledge management. To ensure that a company is successful, business, technology, and human elements must be integrated and balanced into a knowledge measurement system. The introduction of a knowledge audit with the objective to uncovering the tacit knowledge in an organization and of identifying the existing management practices is needed. This chapter uses the quantum mechanical thinking as a reference model for the development of a knowledge potential measurement system. This system is influenced by three measurement components: (1) Person-dependent variables, (2) System-dependent variables and (3) knowledge velocity. Based on several case studies conducted in small and medium-sized enterprises, a process model for the implementation of the knowledge potential framework is discussed and introduced. Future research and limitations of the model are discussed in the final part. Chapter 7 Developing Individual Level Outcome Measures in the Context of Knowledge Management Success .......................................................................................................................... 106 Shahnawaz Muhammed, American University of Middle East, Kuwait William J. Doll, University of Toledo, USA Xiaodong Deng, Oakland University, USA Success of organizational level knowledge management initiatives depends on how effectively individuals implementing these initiatives use their knowledge to bring about outcomes that add value in their work. To facilitate assessment of individual level outcomes in the knowledge management context, this research provides a model of interrelationships among individual level knowledge management success measures which include conceptual knowledge, contextual knowledge, operational knowledge, innovation, and performance. The model was tested using structural equation modeling based on data collected from managerial and professional knowledge workers. The results suggest that conceptual knowledge enhances operational and contextual knowledge. Contextual knowledge improves operational knowledge and is also a key predictor of innovations. The innovativeness of an
individual’s work along with operational knowledge enhances work performance. The results support the proposed model. This model can potentially be used for measuring knowledge management success at the individual level. Chapter 8 Validating Distinct Knowledge Assets: A Capability Perspective ...................................................... 128 Ron Freeze, Emporia State University, USA Uday Kulkarni, Arizona State University, USA Identification and measurement of organizational Knowledge Management capabilities is necessary to determine the extent to which an organization utilizes its knowledge assets. We developed and operationalized a set of constructs to measure capabilities associated with management of knowledge assets identified as distinct Knowledge Capabilities (KCs) comprising the overall Knowledge Management (KM) capability of an organizational unit. Each KC represents a distinct kind of knowledge that requires different organizational process and technological support. This delineation of knowledge allows targeted improvement to a specific KC. We present validation of these capability constructs with empirical evidence from two separate business units in a large semi-conductor manufacturing company, providing the basis of measurement standardization for KM Capability improvement. Confirmatory factor analysis affirmed four KCs, each identified as an overall factor influencing a set of latent descriptor variables. Second Order and General-Specific Structural Equation Models of each capability provide evidence as to the validity of measurement of these knowledge assets. A standardized instrument for measuring knowledge capabilities would not only allow benchmarking, but also allow tracking capabilities over time and linking them to those performance metrics that are deemed appropriate by the organization. Chapter 9 Assessing Knowledge Management: Refining and Cross-Validating the Knowledge Management Index (KMI) using Structural Equation Modeling (SEM) Techniques ......................... 150 Derek Ajesam Asoh, Southern Illinois University Carbondale, USA & National Polytechnic, University of Yaounde, Cameroon Salvatore Belardo, University at Albany, USA Jakov (Yasha) Crnkovic, University at Albany, USA With growing interest in KM-related assessments and calls for rigorous assessment tools, the objective of this study was to apply SEM techniques to refine and cross-validate the KMI, a metric to assess the degree to which organizations are engaged in knowledge management (KM). Unlike previous KM metrics research that has focused on scales, we modeled the KMI as a formative latent variable, thereby extending knowledge on formative measures and index creation from other fields into the KM field. The refined KMI metric was tested in a nomological network and found to be robust and stable when cross-validated; thereby demonstrating consistent prediction results across independent data sets. The study also verified the hypothesis that the KMI is positively correlated with organizational performance (OP). Research contributions, managerial implications, limitations of the study, and direction for further research are discussed.
Chapter 10 A Relational Based-View of Intellectual Capital in High-Tech Firms................................................ 179 G. Martín De Castro, Universidad Complutense de Madrid, Spain P. López Sáez, Universidad Complutense de Madrid, Spain J.E. Navas López, Universidad Complutense de Madrid, Spain M. Delgado-Verde, Universidad Complutense de Madrid, Spain The Resource-Based Theory (RBT) has tried to test the role of strategic resources on sustained competitive advantage and superior performance. Although this theory has found several flaws in order to reach its objective effectively (Priem & Butler, 2001; Foss & Knudsen), recent proposals have suggested that these problems can be overcome (Peteraf & Barney, 2003). This solution requires paying a greater attention to the analysis of knowledge stocks, developing a mid-range theory: the Intellectual Capital-Based View (Reed, Lubatkin & Srinivasan, 2006). This mid-range and pragmatic theory allows the hypotheses development and empirical testing in a more effective way than the Resource Based View (RBV). There is a certain degree of general agreement about the presence of human capital and organizational capital as the main components of intellectual capital, as well as about the fact that the configuration of knowledge stocks will vary from one industry and firm to another one. Taking these assumptions as a starting point, this chapter explores the configuration of intellectual capital that can be empirically found on a sample of high-technology firms. Our findings highlight the importance of relational capital, which must be divided in business and alliance capital, so the strategic alliances play a relevance role in the type of firms that have been included in our research. Section 3 KM Strategies in Practice Chapter 11 The Effect of Organizational Trust on the Success of Codification and Personalization KM Approaches .................................................................................................................................. 192 Vincent M. Ribière, Bangkok University, Thailand Knowledge Management (KM) initiatives are expanding across all types of organizations worldwide. However, not all of them are necessarily successful mainly due to an unfriendly organizational culture. Organizational trust is often mentioned as a critical factor facilitating knowledge sharing. For this research we took an empirical approach to validate this assumption. The purpose of this research is to explore the relationships between organizational trust, a knowledge management strategy (codification vs. personalization) and its level of success. This study was conducted among 97 US companies involved in knowledge management. A survey tool was developed and validated to assess the level of trust, the level of success and the dominant KM strategy deployed by an organization. Six main research hypotheses and a conceptual model were tested. The findings show the impact of trust on the choice of the KM strategy as well as on the level of success.
Chapter 12 Advancing the Success of Collaboration Centered KM Strategy ...................................................... 213 Johanna Bragge, Aalto University School of Economics, Finland Hannu Kivijärvi, Aalto University School of Economics, Finland Knowledge is today more than ever the most critical resource of organizations. At the same time it is, however, also the least-accessible resource that is difficult to share, imitate, buy, sell, store, or evaluate. Organizations should thus have an explicit strategy for the management of their knowledge resources. In this chapter we pay special attention to a KM strategy called collaboration centered strategy. This strategy builds on the assumption that a significant part of personal knowledge can be captured and transferred, and new knowledge created through deep collaboration between the organization’s members. A critical element in the collaboration centered KM strategy is the facilitation process that involves managing relationships between people, tasks and technology. We describe how the Collaboration Engineering approach with packaged facilitation techniques called ThinkLets is able to contribute to this endeavour. Chapter 13 The Relevance of Integration for Knowledge Management Success: Towards Conceptual and Empirical Evidence .................................................................................... 238 Alexander Orth, Accenture, Germany Stefan Smolnik, EBS University of Business and Law, Germany Murray Jennex, San Diego State University, USA Many organizations pursue knowledge management (KM) initiatives, with different degrees of success. One key aspect of KM often neglected in practice is following an integrated and holistic approach. Complementary, KM researchers have increasingly focused on factors that determine KM success and examined whether the metrics used to measure KM initiatives are reasonable. In this chapter, the importance of integration issues for successful KM is analyzed by means of a case study of a KM initiative at an international consulting company. The investigations demonstrate the importance of an integrated KM approach – an integrated view of KM strategy, KM processes, KM technology, and company culture – to ensure KM success. Chapter 14 Strategies for Successful Implementation of KM in a University Setting .......................................... 262 Vittal S. Anantatmula, Western Carolina University, USA Shivraj Kanungo, George Washington University, USA Research has identified enabling factors and inhibitors for implementing knowledge management successfully and to accomplish its strategic objectives. However, it is important to understand how these factors interact with each other to improve or inhibit the performance. With this in mind, this chapter presents a model, based on a research study, to determine underlying relations among these factors and develop strategies implementing KM initiatives.
Chapter 15 DYONIPOS: Proactive Knowledge Supply ....................................................................................... 277 Silke Weiß, Federal Ministry of Finance, Austria Josef Makolm, Federal Ministry of Finance, Austria Doris Ipsmiller, m2n development and consulting gmbh, Austria Natalie Egger, Federal Ministry of Finance, Austria Traditional knowledge management is often combined with extra work to recollect information which is already electronically available. Another obstacle to overcome is to make the content of the collected information easily accessible to enquiries, as conventional searching tools provide only documents and not the content meaning. They are often based on the search for character strings, usually resulting in many unnecessary hits and no or less context information. The research project DYONIPOS focuses on detecting the knowledge needs of knowledge users and automatically providing the required knowledge just in time, while avoiding additional work and violations of the knowledge worker’s privacy, proposing a new way of support. This knowledge is made available through semantic linkage of the relevant information out of existing artifacts. In addition DYONIPOS creates an individual and an organizational knowledge base just in time. Compilation of References .............................................................................................................. 288 About the Contributors ................................................................................................................... 317 Index ................................................................................................................................................... 325
Preface
Organizations use KM (Knowledge Management) because it makes sense. KM, when done successfully, has an impact on the organization and its members. But how do organizations define and measure that success and its impact? Also, while knowing that KM improves an organization may be enough to encourage organizations to pursue a KM initiative, many organizations still need to quantitatively justify an investment in KM. Calculating Return on Investment (ROI) is a popular approach, but how is this done? There are some commonly accepted first steps (a worked illustration follows the list):

•	Find a need or an opportunity that KM satisfies, supports, or resolves.
•	Identify the costs associated with the need or the benefits of the opportunity.
•	Identify the savings or potential earnings that implementing KM will provide.
•	Identify the costs of implementing KM.
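As a minimal sketch of the arithmetic these steps feed into (the dollar figures below are hypothetical and are not drawn from any study in this book), the net benefit attributed to KM is divided by the cost of implementing it:

```python
# Hypothetical figures for illustration only; not taken from this book.
km_savings = 250_000   # estimated annual savings or earnings enabled by KM
km_cost = 100_000      # estimated cost of implementing and operating KM

roi = (km_savings - km_cost) / km_cost
print(f"ROI = {roi:.0%}")  # prints: ROI = 150%
```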
These steps are easily stated but not easily done, and the resulting financial numbers are often questionable. Do the numbers present the full story for KM? Many think they do not, and that stories and anecdotes about KM need to be included to make KM real to management (Moore, 2008). However, is this enough measurement for an organization? This book is about how to implement successful KM initiatives. What is required for KM to be successful? Jennex and Olfman (2005) summarized and synthesized the literature on KM/KMS critical success factors (CSFs) into an ordered set of twelve KM CSFs identified from 17 studies of more than 200 KM projects, sequencing the CSFs according to the number of studies identifying them:

•	A knowledge strategy that identifies users, sources, processes, storage strategy, knowledge, and links to knowledge for the KMS;
•	Motivation and commitment of users, including incentives and training;
•	Integrated technical infrastructure, including networks, databases/repositories, computers, software, and KMS experts;
•	An organizational culture and structure that supports learning and the sharing and use of knowledge;
•	A common enterprise-wide knowledge structure that is clearly articulated and easily understood;
•	Senior management support, including allocation of resources, leadership, and training;
•	Learning organization;
•	The KMS has a clear goal and purpose;
•	Measures are established to assess the impacts of the KMS and the use of knowledge, as well as verification that the right knowledge is being captured;
•	The search, retrieval, and visualization functions of the KMS support facilitated use of knowledge;
•	Work processes are designed that incorporate knowledge capture and use; and
•	Knowledge is secured/protected.
While the above CSFs are useful for determining if the antecedents for KM success exist in an organization, they do not state what success is or how to assess it. This book attempts to answer these questions. Three sections are provided: Section 1 discusses KM success. It defines what KM success is, provides a model of KM success, and discusses KM success in a variety of contexts. Section 2 addresses the issue of measuring KM. It is proposed that organizations cannot manage what they cannot measure. This section provides a variety of studies that provide KM measures based on various theoretical perspectives. Finally, knowing how to define KM success and how to measure KM is important, but without a strategy for implementing the KM initiative the organization is not likely to succeed. Section 3 presents several KM strategies as implemented in a variety of contexts. The following paragraphs provide further description of the chapters.
Section 1: Knowledge Management Success

Chapter 1: Towards a Consensus Knowledge Management Success Definition by Murray E. Jennex, Stefan Smolnik, and David T. Croasdell explores knowledge management (KM) and knowledge management system (KMS) success. Identifying the factors, constructs, and variables that define KM success is crucial to understanding how these initiatives and systems should be designed and implemented. This chapter presents results of a survey looking at how KM practitioners, researchers, KM students, and others interested in KM view what constitutes KM success. The chapter presents some background on KM success and then a series of perspectives on KM/KMS success. These perspectives were derived by looking at responses to questions asking academics and practitioners how they defined KM/KMS success. The chapter concludes by presenting the results of an exploratory survey on KM/KMS success beliefs and attitudes.

Chapter 2: A Model of Knowledge Management Success by Murray E. Jennex and Lorne Olfman describes a knowledge management (KM) Success Model that is derived from observations generated through a longitudinal study of KM in an engineering organization and from KM success factors found in the literature, and that has been modified by the application of these observations and success factors in various projects. The DeLone and McLean (1992, 2003) IS Success Model was used as a framework for the model as it was found to fit the observed success criteria and it provided an accepted theoretical basis for the proposed model.

Chapter 3: Market Knowledge Management, Innovation and Product Performance: Survey in Medium and Large Brazilian Industrial Firms by Cid Gonçalves Filho, Rodrigo Baroni de Carvalho, and George Leal Jamil. In a business environment characterized by a high level of competitiveness, the impact of new products on an organization's revenue is an important factor. This research was developed with the objective of examining empirically the relationships between market knowledge management, innovation and the performance of new products in the market. This chapter analyzes KM (Knowledge Management) success through a market-oriented perspective because, at the end of the day, KM success must lead to better organizational performance. The research model was generated by the combination of market knowledge models and KM success and maturity models. By means of a survey of 387 medium and large industrial firms, and the use of structural equation modeling, the supremacy of the competitor knowledge management process over other constructs was verified as the most important antecedent of new product performance in the market. The results also revealed that innovation was strongly impacted by technology knowledge management and customer knowledge management.

Chapter 4: Does KM Governance = KM Success? Insights from a Global KM Survey by Suzanne Zyngier examines factors that contribute to KM success by differentiating between KM leadership through management and through governance. We look at governance as a structural mechanism that both embeds KM into organizational activity, and lifts it from a series of initiatives to a structured program of activities that are subject to authority, policy, risk management, financial fiduciary duty, and evaluation. Using evidence from 214 respondents to a global internet-based KM survey, we find that having a recognized and defined authority for KM that is well-resourced leads to strategically aligned benefits realized from investment in KM. We demonstrate that governance through assigned authority strongly contributes to strategic KM success.

Chapter 5: An Evaluation of Factors that Influence the Success of Knowledge Management Practices in US Federal Agencies by Elsa Rhoads, Kevin J. O'Sullivan, and Michael Stankosky investigates the status of knowledge management practices implemented across federal agencies of the U.S. government. It analyzes the extent to which this status is influenced by the size of the agency, whether or not the agency type is a Cabinet-level Department or Independent Agency, the longevity of KM Practices implemented in the agency, whether or not the agency has adopted a written KM policy or strategy, and whether the primary responsibility for KM Practices in the agency is directed by a CKO or KM unit versus other functional locations in the agency. The research also tests for possible KM practitioner bias, since the survey was directed to members of the Knowledge Management Working Group of the Federal CIO Council who are KM practitioners in federal agencies.
Section 2: KM Measurements

Chapter 6: Process Model for Knowledge Potential Measurement in SMEs by Kerstin Fink shows that knowledge measurement is developing into a new research field in the area of knowledge management. To ensure that a company is successful, business, technology, and human elements must be integrated and balanced into a knowledge measurement system. A knowledge audit, with the objectives of uncovering the tacit knowledge in an organization and of identifying the existing management practices, is needed. This chapter uses quantum mechanical thinking as a reference model for the development of a knowledge potential measurement system. This system is influenced by three measurement components: (1) person-dependent variables, (2) system-dependent variables, and (3) knowledge velocity. Based on several case studies conducted in small and medium-sized enterprises, a process model for the implementation of the knowledge potential framework is discussed and introduced. Future research and limitations of the model are discussed in the final part.

Chapter 7: Developing Individual Level Outcome Measures in the Context of Knowledge Management Success by Shahnawaz Muhammed, William J. Doll, and Xiaodong Deng shows how the success of organizational-level knowledge management initiatives depends on how effectively individuals implementing these initiatives use their knowledge to bring about outcomes that add value in their work. To facilitate assessment of individual level outcomes in the knowledge management context,
this research provides a model of interrelationships among individual level knowledge management success measures which include conceptual knowledge, contextual knowledge, operational knowledge, innovation, and performance. The model was tested using structural equation modeling based on data collected from managerial and professional knowledge workers. The results suggest that conceptual knowledge enhances operational and contextual knowledge. Contextual knowledge improves operational knowledge and is also a key predictor of innovations. The innovativeness of an individual’s work along with operational knowledge enhances work performance. The results support the proposed model. This model can potentially be used for measuring knowledge management success at the individual level. Chapter 8: Validating Distinct Knowledge Assets: A Capability Perspective, by Ron Freeze, Uday Kulkarni, explain how identification and measurement of organizational Knowledge Management capabilities is necessary to determine the extent to which an organization utilizes its knowledge assets. We developed and operationalized a set of constructs to measure capabilities associated with management of knowledge assets identified as distinct Knowledge Capabilities (KCs) comprising the overall Knowledge Management (KM) capability of an organizational unit. Each KC represents a distinct kind of knowledge that requires different organizational process and technological support. This delineation of knowledge allows targeted improvement to a specific KC. We present validation of these capability constructs with empirical evidence from two separate business units in a large semi-conductor manufacturing company, providing the basis of measurement standardization for KM Capability improvement. Confirmatory factor analysis affirmed four KCs, each identified as an overall factor influencing a set of latent descriptor variables. Second Order and General-Specific Structural Equation Models of each capability provide evidence as to the validity of measurement of these knowledge assets. A standardized instrument for measuring knowledge capabilities would not only allow benchmarking, but also allow tracking capabilities over time and linking them to those performance metrics that are deemed appropriate by the organization. Chapter 9: Assessing Knowledge Management: Refining and Cross-Validating the Knowledge Management Index (KMI) using Structural Equation Modeling (SEM) Techniques, by Derek Ajesam Asoh, Salvatore Belardo, Jakov (Yasha) Crnkovic, show how with growing interest in KM-related assessments and calls for rigorous assessment tools, the objective of this study was to apply SEM techniques to refine and cross-validate the KMI, a metric to assess the degree to which organizations are engaged in knowledge management (KM). Unlike previous KM metrics research that has focused on scales, we modeled the KMI as a formative latent variable, thereby extending knowledge on formative measures and index creation from other fields into the KM field. The refined KMI metric was tested in a nomological network and found to be robust and stable when cross-validated; thereby demonstrating consistent prediction results across independent data sets. The study also verified the hypothesis that the KMI is positively correlated with organizational performance (OP). Research contributions, managerial implications, limitations of the study, and direction for further research are discussed. 
Chapter 10: A Relational Based-View of Intellectual Capital in High-Tech Firms by G. Martín De Castro, P. López Sáez, J.E. Navas López, M. Delgado-Verde. The Resource-Based Theory (RBT) has tried to test the role of strategic resources on sustained competitive advantage and superior performance. Although this theory has found several flaws in order to reach its objective effectively (Priem & Butler, 2001; Foss & Knudsen), recent proposals have suggested that these problems can be overcome (Peteraf & Barney, 2003). This solution requires paying a greater attention to the analysis of knowledge stocks,
developing a mid-range theory: the Intellectual Capital-Based View (Reed, Lubatkin & Srinivasan, 2006). This mid-range and pragmatic theory allows hypothesis development and empirical testing in a more effective way than the Resource Based View (RBV). There is a certain degree of general agreement about the presence of human capital and organizational capital as the main components of intellectual capital, as well as about the fact that the configuration of knowledge stocks will vary from one industry and firm to another. Taking these assumptions as a starting point, this chapter explores the configuration of intellectual capital that can be empirically found in a sample of high-technology firms. Our findings highlight the importance of relational capital, which must be divided into business and alliance capital, so strategic alliances play a relevant role in the type of firms included in our research.
Section 3: KM Strategies in Practice

Chapter 11: The Effect of Organizational Trust on the Success of Codification and Personalization KM Approaches by Vincent M. Ribière explains how Knowledge Management (KM) initiatives are expanding across all types of organizations worldwide. However, not all of them are necessarily successful, mainly due to an unfriendly organizational culture. Organizational trust is often mentioned as a critical factor facilitating knowledge sharing. For this research we took an empirical approach to validate this assumption. The purpose of this research is to explore the relationships between organizational trust, a knowledge management strategy (codification vs. personalization) and its level of success. This study was conducted among 97 US companies involved in knowledge management. A survey tool was developed and validated to assess the level of trust, the level of success and the dominant KM strategy deployed by an organization. Six main research hypotheses and a conceptual model were tested. The findings show the impact of trust on the choice of the KM strategy as well as on the level of success.

Chapter 12: Advancing the Success of Collaboration Centered KM Strategy by Johanna Bragge and Hannu Kivijärvi shows that knowledge is the most critical resource of organizations. At the same time it is, however, also the least-accessible resource and difficult to share, imitate, buy, sell, store, or evaluate. Organizations should thus have an explicit strategy for the management of their knowledge resources. In this chapter we pay special attention to a KM strategy called the collaboration centered strategy. This strategy builds on the assumption that a significant part of personal knowledge can be captured and transferred, and new knowledge created, through deep collaboration between the organization's members. A critical element in the collaboration centered KM strategy is the facilitation process that involves managing relationships between people, tasks and technology. We describe how the Collaboration Engineering approach, with packaged facilitation techniques called ThinkLets, is able to contribute to this endeavour.

Chapter 13: The Relevance of Integration for Knowledge Management Success: Towards Conceptual and Empirical Evidence by Alexander Orth, Stefan Smolnik, and Murray Jennex. Many organizations pursue knowledge management (KM) initiatives, with different degrees of success. One key aspect of KM often neglected in practice is following an integrated and holistic approach. Complementarily, KM researchers have increasingly focused on factors that determine KM success and examined whether the metrics used to measure KM initiatives are reasonable. In this chapter, the importance of integration issues for successful KM is analyzed by means of a case study of a KM initiative at an international consulting company. The investigations demonstrate the importance of an integrated
KM approach – an integrated view of KM strategy, KM processes, KM technology, and company culture – to ensure KM success. Chapter 14: Strategies for Successful Implementation of KM in a University Setting by Vittal S. Anantatmula, Shivraj Kanungo. Research has identified enabling factors and inhibitors for implementing knowledge management successfully and to accomplish its strategic objectives. However, it is important to understand how these factors interact with each other to improve or inhibit the performance. With this in mind, this chapter presents a model, based on a research study, to determine underlying relations among these factors and develop strategies implementing KM initiatives. Chapter 15: DYONIPOS: Proactive Knowledge Supply by Josef Makolm, Silke Weiß, Doris Ipsmiller, Natalie Egger. Traditional knowledge management is often combined with extra work to recollect information which is already electronically available. Another obstacle to overcome is to make the content of the collected information easily accessible to enquiries, as conventional searching tools provide only documents and not the content meaning. They are often based on the search for character strings, usually resulting in many unnecessary hits and no or less context information. The research project DYONIPOS focuses on detecting the knowledge needs of knowledge users and automatically providing the required knowledge just in time, while avoiding additional work and violations of the knowledge worker’s privacy, proposing a new way of support. This knowledge is made available through semantic linkage of the relevant information out of existing artifacts. In addition DYONIPOS creates an individual and an organizational knowledge base just in time. These chapters come from several sources: some were submitted just to this book, some are expansions of conference/journal articles, and some are taken directly from the International Journal of Knowledge Management (IJKM). Taken together, we believe this book provides researchers, students, and practitioners with an excellent overview of how to implement and measure successful KM and/or knowledge initiatives. We hope you enjoy the book. Murray E. Jennex San Diego State University, USA Stefan Smolnik EBS University of Business and Law, Germany
References

Jennex, M. E., & Olfman, L. (2005). Assessing Knowledge Management Success. International Journal of Knowledge Management, 1(2), 33-49.

Moore, M. (2008). Justifying Your Knowledge Management Programme. White paper. Retrieved March 30, 2009, from http://innotecture.files.wordpress.com/2008/11/justifying_your_km_prog3.pdf
Section 1
Knowledge Management Success
Chapter 1
Towards a Consensus Knowledge Management Success Definition

Murray E. Jennex, San Diego State University, USA
Stefan Smolnik, EBS University of Business and Law, Germany
David T. Croasdell, University of Nevada, Reno, USA
Abstract

This chapter explores knowledge management (KM) and knowledge management system (KMS) success. The inspiration for this chapter is the KM Success and Measurement minitracks held at the Hawaii International Conference on System Sciences in January of 2007 and 2008. KM and KMS success are issues needing to be explored. The Knowledge Management Foundations workshop held at the Hawaii International Conference on System Sciences (HICSS-39) in January 2006 discussed this issue and reached agreement that it is important for the credibility of the KM discipline that we be able to define KM success. Additionally, from the perspective of KM academics and practitioners, identifying the factors, constructs, and variables that define KM success is crucial to understanding how these initiatives and systems should be designed and implemented. This chapter presents the results of a survey looking at how KM practitioners, researchers, KM students, and others interested in KM view what constitutes KM success. The chapter presents some background on KM success and then a series of perspectives on KM/KMS success. These perspectives were derived by looking at responses to questions asking academics and practitioners how they defined KM/KMS success. The chapter concludes by presenting the results of an exploratory survey on KM/KMS success beliefs and attitudes.

DOI: 10.4018/978-1-60566-709-6.ch001
Background on KM Success

Jennex summarized various definitions of KM to propose that KM success be defined as reusing knowledge to improve organizational effectiveness by providing the appropriate knowledge to those that need it when it is needed (Jennex, 2005). KM is expected to have a positive impact on the organization that improves organizational effectiveness. DeLone and McLean use the terms success and effectiveness interchangeably, and one of the perspectives proposed in this chapter does the same for KM (DeLone and McLean, 1992 and 2003). Jennex and Olfman (2005) summarized and synthesized the literature on KM/KMS critical success factors, CSFs, into an ordered set of 12 KM CSFs. CSFs were ordered based on the number of studies identifying the CSF. The following CSFs were identified from 17 studies looking at 78 KM projects:

•	A knowledge strategy that identifies users, sources, processes, storage strategy, knowledge, and links to knowledge for the KMS;
•	Motivation and commitment of users including incentives and training;
•	Integrated technical infrastructure including networks, databases/repositories, computers, software, and KMS experts;
•	An organizational culture and structure that supports learning and the sharing and use of knowledge;
•	A common enterprise-wide knowledge structure that is clearly articulated and easily understood;
•	Senior management support including allocation of resources, leadership, and providing training;
•	Learning organization;
•	There is a clear goal and purpose for the KMS;
•	Measures are established to assess the impacts of the KMS and the use of knowledge as well as verifying that the right knowledge is being captured;
•	The search, retrieval, and visualization functions of the KMS support easy knowledge use;
•	Work processes are designed that incorporate knowledge capture and use;
•	Security/protection of knowledge.
However, these CSFs do not define KM/KMS success; they just say what is needed to be successful. Without a definition of KM/KMS success it is difficult to measure actual success. Measuring KM/KMS success is important:

•	To provide a basis for company valuation,
•	To stimulate management to focus on what is important, and
•	To justify investments in KM activities (Jennex and Olfman, 2005; Turban and Aronson, 2001).
Besides these reasons from an organizational perspective, the measurement of KM and KMS success is important for building and implementing efficient KM initiatives and systems from the perspective of KM academics and practitioners (Jennex and Olfman, 2005).
Perspectives on KM/KMS Success

The KM workshop at the 2006 HICSS-39 found that there were several perspectives on KM success. This section briefly summarizes these perspectives.
KM Success and Effectiveness

One perspective on KM success is that KM success and KM effectiveness are interchangeable and imply the same construct or variable. This is based on the view that effectiveness is a manifestation of success. An example would be increasing decision-making effectiveness to generate a positive impact on the organization resulting in successful KM. This perspective uses both process and outcome measures.

KM and KMS Success as Interchangeable

Another perspective is that KM and KMS success are interchangeable. KMS success can be defined as making KMS components more effective by improving search speed, accuracy, etc. As an example, a KMS that enhances search and retrieval functions enhances decision-making effectiveness by improving the ability of the decision maker to find and retrieve appropriate knowledge in a timelier manner. The implication is that by increasing KMS effectiveness, KMS success is enhanced and decision making capability is enhanced, leading to positive impacts on the organization. This is how KM success is defined, and it is concluded that enhancing KMS effectiveness makes the KMS more successful as well as being a reflection of KM success. The Jennex and Olfman KM Success Model (Jennex and Olfman, 2006), based on the DeLone and McLean (1992, 2003) IS Success Model, combines KM and KMS success and utilizes this perspective.

KM and KMS Success as Separate

As opposed to the previous section, this perspective views KM and KMS success as separate measures. It is based on a narrow system view that allows for KMS success that does not translate into KM success. KMS are often seen as a sub-function of KM comprising technical and organizational instruments to implement KM. Thus, KMS success addresses implementation and operation factors in terms of system or process metrics whereas KM success is an assessment of the value that these systems and processes provide to an organization. KM focuses therefore more on the outcome, while KMS focus more on the process. These perspectives are introduced in the following sections.

KM Success as a Process Measure

This perspective views KM success as a process measure. KM success can be described in terms of the efficient achievement of well defined organizational and process goals by means of the systematic employment of both organizational instruments and information and communication technologies for a targeted creation and utilization of knowledge as well as for making knowledge available. KM is a support function to improve knowledge-intensive business processes. An example would be supporting the technology-forecasting process in an IT consulting firm by technical components of a KMS (Henselewski, et al., 2006). Complementary, the effective implementation of knowledge processes (i.e. acquisition, creation, sharing, and codification) is seen as a part of KM success. This perspective focuses therefore on measuring how much KM contributes to improving the effectiveness of business and knowledge processes.

KM Success as an Outcome Measure

In contrast, KM success can be viewed as an outcome measure. KM success is therefore seen as a measure of the various outcomes of knowledge process capabilities existing within an organization as a result of undertaken KM initiatives. Typical outcomes in terms of organizational performance are the enhancement of:

•	Product and service quality,
•	Productivity,
•	Innovative ability and activity,
•	Competitive capacity and position in the market,
•	Proximity to customers and customer satisfaction,
•	Employee satisfaction,
•	Communication and knowledge sharing, and
•	Knowledge transparency and retention.

KM Success as Combined Process and Outcome Measures

The last perspective views KM success as a combination of process and outcome measures. Respective descriptions of KM success focus on improved process effectiveness as well as on achieving actionable outcomes. The first and third perspectives contain examples for this combined approach.
MetHodologY This chapter is exploratory research with the goal of guiding the KM community towards a consensus definition of KM success. The chapter builds on the results of an exploratory and a confirmatory survey (discussed below) reported in Jennex, et al., (2007). These survey results included a definition of KM success and identification of a set of dimensions and measures. As part of the confirmatory survey, respondents were asked what dimensions/measures they would add or delete from a list of those presented. This chapter analyzes these comments by tallying them and then putting them into context by comparing the KM success definition dimensions and measures to the Jennex Olfman (2006) KM Success Model. The exploratory survey was generated through an expert panel approach. The 30 members of the editorial review board of the International Journal of Knowledge Management, IJKM, were asked to provide their definitions of KM success. Thirteen responses were received. These responses were used to generate an exploratory survey of KM success, which used 5-point Likert scale items to
solicit agreement on various perspectives and proposed KM success definitions. The perspectives were generated through an analysis of the expert board responses that distinguished two groups. The first grouping examined the measures used to determine KM success. Three subgroups were then observed: process-based measures, outcome-based measures, and a combination of process- and outcome-based measures. The second grouping of responses provided two subgroups: those that combined KM and KMS success measures and those that viewed KM and KMS success as separate measures. A final observation was that many proposed definitions used success and effectiveness interchangeably. The exploratory survey also collected data on the KM expertise and focus of the respondents. Furthermore, the survey offered text boxes that allowed for free-form input of additional KM success factors or measures, KM success definitions, and thoughts on the differences between KM and KMS success.

The exploratory survey was administered using a web form with data collected and stored automatically. Survey respondents were solicited via broadcast emails to the ISWorld and DSI email list servers, to lists of KM researchers maintained by the authors, and to the editorial review board and list of authors for the International Journal of Knowledge Management, IJKM. An initial request was sent followed by a second request approximately one week later. One hundred and three usable survey responses were received. Thirteen were from KM practitioners, 70 were from KM researchers, 6 were from KM students, and 14 were from others including academics interested in KM but not active KM researchers. Likert items were analyzed using means and standard deviations, as no hypotheses had been proposed that required testing.

The results of the exploratory survey were used to generate a second survey. This survey presented a composite definition of KM success and a set of measures for each of the indicated dimensions. A 7-point Likert scale was used to solicit agreement
Table 1. Opinions on KM success perspectives, mean (std dev) (5-point Likert scale)

Definition                         Overall     Research    Practice    Other       Student
Success = Effectiveness            3.1 (1.4)   3 (1.4)     3.3 (1.3)   3.2 (1.5)   3.7 (0.5)
KM = KMS Success                   2.6 (1.5)   2.5 (1.4)   3.2 (1.6)   3.4 (1.5)   2.2 (1)
KM = KMS Measures                  2.6 (1.4)   2.4 (1.4)   3.2 (1.6)   3 (1.4)     2.4 (0.9)
KM Success = Process               2 (1)       1.9 (0.9)   2.2 (1.1)   1.9 (0.8)   3 (1.3)
KM Success = Outcomes              2 (1)       2 (1)       2.2 (1.4)   1.7 (0.8)   2.3 (1)
KM Success = Process & Outcomes    4 (0.9)     3.9 (1)     3.8 (1)     4.3 (0.6)   4.2 (0.8)

Note: Overall n = 103, researcher n = 70, practitioner n = 13, academics n = 14, and student n = 6. Values are rounded to 2 significant digits.
on the composite definition and each set of measures. Additionally, as in the exploratory survey, items were provided for collecting data on KM expertise and respondent focus. Also, each set of measures had text boxes where respondents could indicate measures they would add to or remove from that set. The second survey was also administered using a web form with respondents solicited in the same manner as the exploratory survey. One hundred and ninety-four usable survey responses were received. Sixteen were from KM practitioners, 114 were from KM researchers, 23 from KM students, and 41 were from others including academics interested in KM but not active KM researchers. Likert items were analyzed using means and standard deviations, as no hypotheses had been proposed that required testing.
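Both surveys were analyzed only descriptively, item by item. As a minimal sketch of that kind of analysis, assuming a tabular export of the responses (the file name, column names, and group labels below are illustrative assumptions, not the actual survey fields), the per-group means and standard deviations of the sort reported in Tables 1-4 could be computed as follows:

```python
# Hypothetical sketch: descriptive analysis of Likert items by respondent group.
# Assumes a CSV with one row per respondent, a "focus" column (practitioner,
# researcher, student, other) and one column per Likert item; these names are
# illustrative, not the survey's actual field names.
import pandas as pd

responses = pd.read_csv("km_success_survey.csv")

item_cols = [c for c in responses.columns if c.startswith("item_")]

# Overall means and standard deviations for each item.
overall = responses[item_cols].agg(["mean", "std"]).round(1)

# The same statistics broken out by respondent focus, mirroring the
# Overall/Research/Practice/Other/Student columns of the tables.
by_group = responses.groupby("focus")[item_cols].agg(["mean", "std"]).round(1)

print(overall)
print(by_group)
```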
RESULTS

There was little consensus on KM success perspective or definition from the first survey while we did find agreement on a definition of KM success and measures of success in the second survey. The results of the first survey are summarized in Tables 1-3 while the results of the second survey are presented in Table 4.

Table 1 presents opinions with respect to the perspectives on KM success. The only perspective that tends to have any consensus agreement is that KM success is a combination of process and
outcome measures and is NOT just process or just outcomes. We are undecided if success and effectiveness are equivalent measures and tend to be undecided to slightly against the idea that KM and KMS success are equivalent.

Table 2 summarizes opinions on five suggested components of KM and KMS success definitions. There appears to be consensus on using organization-specific subjective measures derived for KM process capabilities. Examples of these capabilities include knowledge reuse, quality, relevance, effectiveness of acquisition, search, and application of knowledge, etc. There also appears to be consensus that any KM success definition should include providing the appropriate knowledge when needed. Additionally, there is consensus that use is not a good measure of KMS success. It is interesting to note that practitioners and students support the use of firm performance measures as indicators of KM success while there is less support for these measures from researchers and academics. It is also interesting to note that academics and students tend to support the use of measures reflecting direct returns from organizational and individual learning and application of knowledge while researchers and practitioners are less favorable to them.
Table 2. Opinions on KM and KMS success definition components, mean (std dev) (5-point Likert scale)

"Subjective measure of various outcomes of KM processes capabilities" should be included in a definition of KM success:
  Overall 4.1 (0.8), Research 4 (0.9), Practice 4.3 (0.8), Other 4.2 (0.9), Student 4.5 (0.8)

"Achieving direct returns from learning and projection" should be included in a definition of KM success:
  Overall 3.8 (1), Research 3.7 (1), Practice 4 (1), Other 3.6 (1), Student 4.3 (0.5)

"Success of KMS should be measured in terms of pure usage statistics" should be included in a definition of KM success:
  Overall 2.5 (1.2), Research 2.5 (1.2), Practice 2.2 (1.1), Other 2.6 (1.2), Student 2.8 (1.2)

"Success of KMS should be measured in terms of firm performance" should be included in a definition of KM success:
  Overall 3.7 (1), Research 4.1 (1), Practice 3.6 (1.1), Other 3.5 (0.8), Student 4 (0.9)

"Providing the appropriate knowledge when needed" should be included in a definition of KM success:
  Overall 4.2 (0.9), Research 4.2 (0.9), Practice 4.3 (0.9), Other 4.4 (0.6), Student 4.3 (0.5)

Note: Overall n = 103, researcher n = 70, practitioner n = 13, academics n = 14, and student n = 6. Values are rounded to 2 significant digits.
Table 3. Opinions on KM and KMS success definitions, mean (std dev) (5-point Likert scale)

KMS success can be defined as making KMS components more effective by improving search speed, accuracy, etc.
  Overall 3 (1.2), Research 2.8 (1.1), Practice 3.6 (1.2), Other 3.1 (1.1), Student 3.2 (1)

KM success is the ability to leverage knowledge resources to achieve actionable outcomes.
  Overall 4 (1), Research 4 (1), Practice 4.3 (0.9), Other 3.9 (0.9), Student 3.7 (1)

KM success is reusing knowledge to improve organizational effectiveness by providing the appropriate knowledge to those that need it when it is needed.
  Overall 3.9 (1), Research 3.8 (1.1), Practice 4.4 (0.91), Other 4.1 (0.7), Student 3.8 (0.4)

KM success is knowledge – tacit and explicit alike – circulates freely throughout the organization, with no debilitating clumping, clotting, or hemorrhaging.
  Overall 3 (1.2), Research 2.8 (1.2), Practice 3.2 (1.5), Other 3.4 (0.8), Student 2.7 (1)

KM success is the efficient achievement of well defined organizational and process goals by means of the systematic employment of both organizational instruments and information and communication technologies for a targeted creation and utilization of knowledge as well as for making knowledge available.
  Overall 3.7 (1.2), Research 3.5 (1.3), Practice 4.2 (1.1), Other 3.8 (0.9), Student 3.8 (1.2)

Note: Overall n = 103, researcher n = 70, practitioner n = 13, academics n = 14, and student n = 6. Values are rounded to 2 significant digits.
Table 3 summarizes opinions on five suggested definitions of KM and KMS success. There appears to be little consensus on these definitions other than a general neutrality on KM success as the flow of knowledge and KMS success as improving effectiveness of the KMS components. However, there are some interesting observations. KM success as the ability to leverage knowledge resources to achieve actionable outcomes is overall supported with the strongest support
coming from practitioners. This is interesting but not surprising as practitioners tend to favor definitions and measures that are objective, readily measurable, and have an obvious impact on the organization. This is also why practitioners favor KM success as reusing knowledge to improve organizational effectiveness and KM success as the efficient achievement of well defined organizational goals for targeted creation and utilization of knowledge.
Table 4. Opinions on KM and KMS success definition and sets of measures, mean (std dev) (7-point Likert scale)

Proposed definition: "KM success is a multidimensional concept. It is defined by capturing the right knowledge, getting the right knowledge to the right user, and using this knowledge to improve organizational and/or individual performance. KM success is measured using the dimensions of impact on business processes, strategy, leadership, efficiency and effectiveness of KM processes, efficiency and effectiveness of the KM system, organizational culture, and knowledge content."
  Overall 5.4 (1.4), Research 5.3 (1.5), Practice 6.1 (1.4), Other 5.6 (1.4), Student 5.5 (1.2)
5.7 (1.2)
5.7 (1.0)
5.3 (1.4)
5.7 (1.0)
5.3 (1.3)
5.4 (1.6)
Impact on business process measures 5.5 (1.3)
5.3 (1.4)
5.8 (1.4) Strategy measures
5.3 (1.4)
5.1 (1.6)
5.2 (1.5)
5.1 (1.5)
6.1 (0.6) Leadership measures 5.3 (1.5)
KM process effectiveness and efficiency measures 5.7 (1.3)
5.5 (1.4)
6.2 (0.8)
5.8 (1.3)
5.7 (1.4)
KM system effectiveness and efficiency measures 5.6 (1.3)
5.5 (1.4)
6.0 (0.7)
5.8 (1.2)
5.4 (1.3)
5.7 (1.1)
5.6 (1.2)
5.7 (1.2)
5.5 (1.3)
Learning culture measures 5.6 (1.2)
5.5 (1.4)
5.4 (1.4)
5.2 (1.5)
6.0 (0.8) Knowledge content measures 6.0 (1.0)
Table 4 summarizes opinions from the second survey on a proposed success definition generated from the first survey and sets of measures for the dimensions listed in the proposed definition. There appears to be some level of consensus on the proposed definition and measures. However, we do not consider it strong consensus given that the mean response is between agree and somewhat agree. Still, this is considered a strong beginning to establishing a common definition and set of success measures. (For Table 4, overall n = 194, researcher n = 114, practitioner n = 16, others n = 41, and student n = 23; values are rounded to 2 significant digits.)

The comments were used to adjust the measures identified in the survey. However, a simple tallying
of the comments and adjusting the measures based on this tally was not useful. Instead, the comments suggested that the entire list of dimensions and measures had to be reviewed in the context of a KM success model and CSFs. These findings are discussed in the following paragraphs.

The impact on business processes dimension. The comments suggested adding innovation and agility as measures. They also supported removing labor-saving measures, refining learning through mistakes or insights, and clarifying the differences between action and outcome measures.

The strategy dimension. In this study, strategy refers to KM that is designed to support organization-wide strategic systems and initiatives. The comments first questioned the meaning of strategy. They also suggested that social network analysis, SNA, measures should be added to provide indicators of cohesiveness, centrality, and the
strength of ties. Additional issues were raised with respect to strategy or alignment to strategy also impacting employee performance, and the way in which social capital and knowledge integration measure strategy.

The leadership dimension. The comments suggested adding social network analysis, SNA, measures that provide indicators of cohesiveness, centrality, and the strength of ties.

The KM process efficiency and effectiveness dimension. The comments questioned whether the measures should be lifecycle-based rather than process-based. Additionally, they suggested considering scalability and changing "safe and effective storage of knowledge" to "secure, private, and reliable storage of knowledge". However, these terms have conceptual definitions that differ from "safe", while "effective" in terms of storage is difficult to define. The comments furthermore questioned whether increased collaboration is a true measure for this dimension.

The KMS effectiveness and efficiency dimension. The comments queried the synonymous use of usability and adaptability, questioned whether this dimension does in fact differ from KM process, and suggested that measures like maintenance costs and system measures such as maintainability and availability should be added.

The learning culture dimension. The comments questioned change in leadership culture as a leadership measure, and suggested adding organizational learning as well as incentive measures.

The knowledge content dimension. The comments questioned whether retrieval does in fact differ from KMS retrieval and suggested adding integrity, temporal, lifecycle, visualization, and multifacetedness measures. They furthermore suggested that knowledge creation measures should be part of the KM process dimension.

The questions raised by the comments suggest that there may be issues with the dimensions. This drove the analysis of the dimensions with the CSFs and the Jennex and Olfman (2006) KM Success Model. An inspection of the list of
CSFs reveals conflicts that can affect the success dimensions. CSFs such as organizational culture, learning organization, and senior management support are regarded as necessary for KM to succeed. This in turn raises the question whether a dimension can be a CSF and, simultaneously, a reflection of success. We conclude that this is not likely: CSFs are indeed necessary for KM success to occur, but they are not reflections of KM success in and of themselves. This is borne out by the Jennex and Olfman (2006) KM Success Model, as it is a causal model. This suggests that the success dimensions leadership and learning organization should be removed. Moreover, the success dimensions in the Jennex and Olfman (2006) KM Success Model lead us to question whether a KMS effectiveness and efficiency dimension and perhaps even a KM process efficiency and effectiveness dimension are required as reflections of KM success. The following section provides a discussion that leads to the final definition of the dimensions of KM success.
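As an illustration of the social network analysis measures suggested in the comments for the strategy and leadership dimensions (cohesiveness, centrality, and strength of ties), the following minimal sketch shows how such indicators could be computed from an elicited knowledge-sharing network. The library (networkx), the people, and the tie weights are assumptions made purely for the example and are not drawn from the surveys:

```python
# Illustrative sketch: standard SNA indicators over a hypothetical
# knowledge-sharing network; names and weights are invented for the example.
import networkx as nx

G = nx.Graph()
# Each edge is a knowledge-sharing tie; the weight approximates tie strength
# (e.g., frequency of knowledge exchange between the two people).
G.add_weighted_edges_from([
    ("Ana", "Ben", 5), ("Ana", "Chen", 2), ("Ben", "Chen", 4),
    ("Chen", "Dee", 1), ("Dee", "Eli", 3),
])

cohesiveness = nx.density(G)            # share of possible ties that are present
centrality = nx.degree_centrality(G)    # who the knowledge-sharing hubs are
brokerage = nx.betweenness_centrality(G)  # who sits between otherwise separate groups
avg_tie_strength = sum(d["weight"] for _, _, d in G.edges(data=True)) / G.number_of_edges()

print(f"density={cohesiveness:.2f}, mean tie strength={avg_tie_strength:.1f}")
print("degree centrality:", centrality)
print("betweenness centrality:", brokerage)
```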
DISCUSSION

This was exploratory research, so few conclusions can be drawn. However, using two surveys has allowed us to reach some consensus on a KM success definition and set of success measures. The consensus KM success definition is:

"KM success is a multidimensional concept. It is defined by capturing the right knowledge, getting the right knowledge to the right user, and using this knowledge to improve organizational and/or individual performance. KM success is measured using the dimensions of impact on business processes, strategy, leadership, efficiency and effectiveness of KM processes, efficiency and effectiveness of the KM system, organizational culture, and knowledge content."
Also, there are a few points of consensus that can be identified from the initial survey:

• KM success and KMS success may not be the same thing.
• Usage is not a good measure of KM or KMS success.
Additionally, it is possible that there is a different focus on KM success between practitioners and researchers. Researchers do not seem to have a clear idea of KM success, while practitioners appear focused on KM success as being tied to its impact on organizational performance and effectiveness. This cannot be stated conclusively, as the number of practitioner responses (n = 13) is too low to support this supposition. However, it is not unexpected that practitioners would have a focus on organizational impact as a measure of KM and KMS success. Given that KM is an action discipline, researchers should accept this focus and incorporate it into their investigations.

The preliminary set of success dimensions must be examined critically, though, as previous discussions have shown that there is conflict between what is regarded as an antecedent, and thus necessary for success, and what is regarded as a reflection of success. This is made more complex as factors that are antecedents to KM success need to remain in place to sustain continued KM success. We therefore start this discussion by examining the research behind the CSF of organizational and learning cultures.

In an executive development program, Alavi and Leidner (1999) surveyed executive participants with respect to what was required for a successful KMS. They found that organizational and cultural issues associated with user motivation to share and use knowledge are the most significant. Yu et al. (2004) find that KM drivers such as a learning culture, knowledge sharing intention, KMS quality, rewards, and KM team activity significantly affect KM performance. These conclusions were deduced from a survey of 66
Korean firms. Cross and Baird (2000) propose that KM will not improve business performance by simply using technology to capture and share the lessons of experience. They postulate that for KM to improve business performance, it has to increase organizational learning through the creation of organizational memory. Subsequently, 22 projects were examined to investigate this. The conclusion is that improving organizational learning improves the likelihood of KM success. Chan and Chau (2005) deduce lessons learned from a failed case of KM in a Hong Kong organization and find a need for a knowledge-sharing culture. In their study of KM abandonment in four KM projects, Lam and Chua (2005) identify CSFs for KM from the literature, including a learning culture. Other studies identifying a learning culture as a CSF include Goh (2002), McDermott and O'Dell (2001), and Zolingen et al. (2001).

The above research examined successful and failed KM and, on the whole, concludes that an appropriate organizational culture and learning culture are necessary antecedents to KM success, but are not an outcome of KM success. Nevertheless, it can also be concluded that successful KM should lead to the strengthening of organizational and learning cultures. However, it is difficult to quantify measurements of change in culture, which leads to the decision that organizational and learning cultural measures of KM success should be dropped and used only as CSFs.

Leadership is an interesting concept. The CSF of senior management support can be considered leadership and it has been found to be necessary for KM to succeed, but can leadership also be a reflection of KM success? In their above-mentioned study, one of Chan and Chau's (2005) key findings is the need for continued top management support and involvement. Davenport et al. (1998) studied 31 projects in 24 companies (18 were successful, five were considered failures, and eight were too new to be rated). Eight CSFs, including senior management support, were common in successful KM projects. Jennex and Olfman (2000) studied
three KM projects and also observed senior management support as a CSF. In their above-mentioned study, Lam and Chua (2005) also identify continuous top management support (as also identified by Storey and Barnett, 2000) as a CSF. Holsapple and Joshi (2000) investigated factors that influenced the management of knowledge in organizations by using a Delphi panel consisting of 31 recognized KM researchers and practitioners and found leadership and top management commitment/support to be crucial. This finding is also supported by Bals et al.'s (2007) study on key success factors for a successful KM initiative in a global bank. Furthermore, several researchers have demonstrated the need to create incentives and motivation within the organization to create and reuse knowledge (Davenport et al., 1998; Ginsberg and Kambil, 1999; Jennex and Olfman, 2000; Lam and Chua, 2005; Sage and Rouse, 1999; Yu et al., 2004). Finally, Malhotra and Galletta (2003) identify the critical importance of user commitment and motivation through a survey study of users of a KMS implemented in a health care organization.

The above research found that continuous senior management support is a CSF and also necessary for sustaining KM success. Leadership indicates support for KM, providing the management environment that encourages KM through knowledge creation and reuse by members of the organization, and providing adequate resources for the KM/KMS initiative. This is an antecedent to KM success and also an outcome of KM success, as successful KM reinforces knowledge leadership. Why do we argue that culture is a CSF but not an output of KM success, while leadership is argued to be both? It is our opinion that culture is not changed quickly, that it takes much time to effect cultural changes but that individuals can be changed quickly, and that success breeds success, i.e., that successful KM will encourage senior management to push KM even more.

Strategy as a dimension can be discussed briefly as the only point of contention is what it actually
refers to. This dimension refers to the impact of KM on organizational strategy. This can occur through impacts on organizational and/or strategic systems, on strategic intelligence gathering, or merely on fulfilling strategy. This dimension differentiates between impacts on business systems and strategic systems; it examines organizational impacts instead of localized impacts. The decision is therefore that this dimension needs to be renamed and is thus changed to "impacts on strategy".

The next dimensions needing discussion are KM and KMS efficiency and effectiveness. Since this chapter takes the perspective that KM and KMS success are essentially similar, it follows that as success dimensions they should be similar. However, should they even be success dimensions? It is clear that they are antecedents to KM success, but are improvements in efficiency and effectiveness results and measures of KM success? Using the Jennex and Olfman (2006) KM Success Model, we determine that these two dimensions are not measures of KM success. While it is agreed that improving KM/KMS effectiveness and efficiency will enhance KM and knowledge reuse in an organization, we reject the notion that simply being more effective or efficient in KM/KMS is a reflection of KM success.

The final dimension needing discussion is knowledge content. At first, it seems as if this dimension should be treated the same as KM/KMS effectiveness and efficiency. This is, however, rejected. Instead, we accept that knowledge content is a reflection and measure of KM success, as well as being an antecedent to KM success. The Jennex and Olfman (2006) KM Success Model is the basis for this determination. The knowledge quality dimension is an antecedent to KM success; however, there is also a feedback process from the impact of KM use to guide further knowledge content and quality. Much like leadership, it is anticipated that KM success will be reflected in the increased quantity and quality of knowledge content; and that a lack of KM success will also be
reflected in a decrease in the quantity and quality of knowledge content.

There are some limitations to this research. It is quite possible that the reason little consensus has been observed is that KM and KMS success are complex, multidimensional constructs. It may be that KM and KMS success includes outcome measures, quality of knowledge, how well the KM processes function, organizational culture measures, usability measures, and strategy measures. This is consistent with the DeLone and McLean model of Information Systems success (DeLone and McLean, 1992, 2003), and there is much empirical evidence to support the correctness of this model. This model is also the basis of the Jennex and Olfman KM Success Model (Jennex and Olfman, 2006). It is quite likely that the exploratory survey used for this research, while generated using an expert panel, did not capture the multidimensional nature of the provided KM success definitions and therefore made it difficult for respondents to find statements they fully agreed with. This limitation was considered when generating the second survey, and it appears that this has improved consensus on the KM success definition generated from the first survey.
CONCLUSION

It is difficult to reach any conclusions with this research; no hypotheses were proposed or tested. This is okay, as the purpose of this chapter is to propose a definition of KMS success. Before doing this it is important to identify areas of consensus and areas of disagreement. The following points are areas of agreement:

• KM and KMS success are likely different definitions (note that at least one of the authors greatly disagrees with this point).
• Use is a poor measure of KM and KMS success.
• KM success is likely a multidimensional construct that will include process and outcome measures.
• A base definition of KM success is: KM success is reusing knowledge to improve organizational effectiveness by providing the appropriate knowledge to those that need it when it is needed.
Additionally, a base definition of KM success can be established: "KM success is a multidimensional concept. It is defined by capturing the right knowledge, getting the right knowledge to the right user, and using this knowledge to improve organizational and/or individual performance. KM success is measured by means of the dimensions: impact on business processes, impact on strategy, leadership, and knowledge content."

Some areas of disagreement are in further need of discussion:

• KM success and effectiveness are likely the same and will be able to use the same measures.
• KM and KMS success are essentially the same (in deference to the authors and consistent with a Churchman view of a KMS and with DeLone and McLean, 1992, 2003).
• The role of learning and firm performance in KM success.
• The role of outcome measures such as speed, accuracy, amount of knowledge stored and used, etc. in KM and KMS success.
It is expected that it will take a great deal of research before consensus is reached on what KM and KMS success is. It is concluded that these findings from an exploratory survey are a good starting point for this discussion.
REFERENCES

Alavi, M., & Leidner, D. E. (1999). Knowledge Management Systems: Emerging Views and Practices from the Field. In Proceedings of the 32nd Hawaii International Conference on System Sciences. Hawaii, USA: IEEE Computer Society Press.

Bals, C., Smolnik, S., & Riempp, G. (2007). Assessing User Acceptance of a Knowledge Management System in a Global Bank: Process Analysis and Concept Development. In Proceedings of the 40th Hawaii International Conference on System Sciences. Hawaii, USA: IEEE Computer Society Press.

Chan, I., & Chau, P. Y. K. (2005). Getting Knowledge Management Right: Lessons from Failure. International Journal of Knowledge Management, 1(3), 40–54.

Churchman, C. W. (1979). The Systems Approach (revised and updated). New York: Dell Publishing.

Cross, R., & Baird, L. (2000). Technology Is Not Enough: Improving Performance by Building Organizational Memory. Sloan Management Review, 41(3), 41–54.

Davenport, T. H., DeLong, D. W., & Beers, M. C. (1998). Successful Knowledge Management Projects. Sloan Management Review, 39(2), 43–57.

DeLone, W. H., & McLean, E. R. (1992). Information Systems Success: The Quest for the Dependent Variable. Information Systems Research, 3, 60–95. doi:10.1287/isre.3.1.60

DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean Model of Information Systems Success: A Ten-Year Update. Journal of Management Information Systems, 19(4), 9–30.
Ginsberg, M., & Kambil, A. (1999). Annotate: A Web-based Knowledge Management Support System for Document Collections. In Proceedings of the 32nd Hawaii International Conference on System Sciences. Hawaii, USA: IEEE Computer Society Press.

Goh, S. C. (2002). Managing Effective Knowledge Transfer: An Integrative Framework and Some Practice Implications. Journal of Knowledge Management, 6(1), 23–30. doi:10.1108/13673270210417664

Henselewski, M., Smolnik, S., & Riempp, G. (2006). Evaluation of Knowledge Management Technologies for the Support of Technology Forecasting. In Proceedings of the 39th Hawaii International Conference on System Sciences. Hawaii, USA: IEEE Computer Society Press.

Holsapple, C. W., & Joshi, K. D. (2000). An Investigation of Factors that Influence the Management of Knowledge in Organizations. The Journal of Strategic Information Systems, 9, 235–261. doi:10.1016/S0963-8687(00)00046-9

Jennex, M. E. (2005). What is Knowledge Management? International Journal of Knowledge Management, 1(4), 1–5.

Jennex, M. E., & Olfman, L. (2000). Development Recommendations for Knowledge Management/Organizational Memory Systems. In Proceedings of the Information Systems Development Conference.

Jennex, M. E., & Olfman, L. (2005). Assessing Knowledge Management Success. International Journal of Knowledge Management, 1(2), 33–49.

Jennex, M. E., & Olfman, L. (2006). A Model of Knowledge Management Success. International Journal of Knowledge Management, 2(3), 51–68.
Jennex, M. E., Smolnik, S., & Croasdell, D. (2007). Towards Defining Knowledge Management Success. In Proceedings of the 40th Hawaii International Conference on System Sciences. Hawaii, USA: IEEE Computer Society Press.

Lam, W., & Chua, A. (2005). Knowledge Management Project Abandonment: An Explanatory Examination of Root Causes. Communications of the Association for Information Systems, 16, 723–743.

Malhotra, Y., & Galletta, D. (2003). Role of Commitment and Motivation as Antecedents of Knowledge Management Systems Implementation. In Proceedings of the 36th Hawaii International Conference on System Sciences. Hawaii, USA: IEEE Computer Society Press.

McDermott, R., & O'Dell, C. (2001). Overcoming Cultural Barriers to Sharing Knowledge. Journal of Knowledge Management, 5(1), 76–85. doi:10.1108/13673270110384428
Sage, A. P., & Rouse, W. B. (1999). Information Systems Frontiers in Knowledge Management. Information Systems Frontiers, 1(3), 205–219. doi:10.1023/A:1010046210832

Storey, J., & Barnett, E. (2000). Knowledge Management Initiatives: Learning from Failure. Journal of Knowledge Management, 2(4), 145–156. doi:10.1108/13673270010372279

Turban, E., & Aronson, J. E. (2001). Decision Support Systems and Intelligent Systems (6th ed.). Upper Saddle River, NJ: Prentice Hall.

Yu, S.-H., Kim, Y.-G., & Kim, M.-Y. (2004). Linking Organizational Knowledge Management Drivers to Knowledge Management Performance: An Exploratory Study. In Proceedings of the 37th Hawaii International Conference on System Sciences. Hawaii, USA: IEEE Computer Society Press.

Zolingen, S. J. van, Streumer, J. N., & Stooker, M. (2001). Problems in Knowledge Management: A Case-Study of a Knowledge-Intensive Company. International Journal of Training and Development, 5(3), 168–184. doi:10.1111/1468-2419.00130
Chapter 2
A Model of Knowledge Management Success

Murray E. Jennex, San Diego State University, USA
Lorne Olfman, Claremont Graduate University, USA

DOI: 10.4018/978-1-60566-709-6.ch002
ABSTRACT

This chapter describes a knowledge management (KM) Success Model that is derived from observations generated through a longitudinal study of KM in an engineering organization and from KM success factors found in the literature, and that was modified by the application of these observations and success factors in various projects. The DeLone and McLean (1992, 2003) IS Success Model was used as a framework for the model as it was found to fit the observed success criteria and it provided an accepted theoretical basis for the proposed model.
INTRODUCTION

Knowledge Management, KM, and Knowledge Management System, KMS, success is an issue needing to be explored. The Knowledge Management Foundations workshop held at the Hawaii International Conference on System Sciences in January 2006 discussed this issue and reached agreement that it is important for the credibility of the KM discipline that we be able to define KM success. Also, Turban and Aronson (2001) list three reasons for measuring the success of KM and a KMS:
• To provide a basis for company valuation
• To stimulate management to focus on what is important
• To justify investments in KM activities.
All are good reasons from an organizational perspective. Additionally, from the perspective of KM academics and practitioners, identifying the factors/constructs/variables that define KM success is crucial to understanding how these initiatives and systems should be designed and implemented. It is the purpose of this paper to present a model that specifies and describes the antecedents of KM/KMS success so that researchers and practitioners can predict if a specific
KM/KMS initiative will be successful. The paper assumes that KM and KMS success cannot be separated; this is based on a broad, Churchman view of what constitutes a KMS and a definition of success that is not reliant solely on technical effectiveness. The other basic assumption for this paper is that success and effectiveness, as used in the KM literature, are synonymous terms. The remainder of the paper uses the term KM to refer to KM and KMS and success to refer to success and effectiveness. The reasoning for these assumptions is discussed later in the paper. The proposed KM Success Model is an explication of the widely accepted DeLone and McLean IS Success Model, which was used as it was able to be modified to fit the observations and data collected in a longitudinal study of Organizational Memory, OM, and KM, it fit success factors found in the KM literature, and the resulting KM Success Model was useful in predicting success when applied to the design and implementation of a KM initiative and/or a KMS. Additionally, the stated purpose of the DeLone and McLean (1992, 2003) IS Success Model is to be a generalized framework describing success dimensions that researchers can adapt and define specific contexts of success (DeLone and McLean, 2003). Before presenting the KM Success Model we will discuss the concepts of knowledge, KM, KMS, and KM/KMS success. We will then briefly discuss the DeLone and McLean (1992, 2003) IS Success Model, present the KM Success Model, and discuss the differences. We will conclude by summarizing studies that support the KM Success Model and present operationalizations that can be used to evaluate the constructs used to define the KM Success Model dimensions.
Knowledge, OM, and KM

Alavi and Leidner (2001) summarize and extend the significant literature relating to knowledge, knowledge management, and knowledge management systems. They view organizational knowl-
edge and OM as synonymous labels as do Jennex and Olfman (2002). This is useful as it allows for the combination of research results from OM and KM. It is also born out in the literature. Huber, Davenport, and King (1998) summarize OM as the set of repositories of information and knowledge that the organization has acquired and retains. Stein and Zwass (1995) define OM as the means by which knowledge from the past is brought to bear on present activities resulting in higher or lower levels of organizational effectiveness, and Walsh and Ungson (1991) define OM as stored information from an organization’s history that can be brought to bear on present decisions. Davenport and Prusak (1998) define knowledge as an evolving mix of framed experience, values, contextual information and expert insight that provides a framework for evaluating and incorporating new experiences and information. Knowledge often becomes embedded in documents or repositories and in organizational routines, processes, practices, and norms. Knowledge is also about meaning in the sense that it is context-specific (Huber, Davenport, and King, 1998). Jennex (2006) extends the concepts of context to also include associated culture that provides frameworks for understanding and using knowledge. Ultimately we conclude that knowledge contains information, but information is not necessarily knowledge. Also, we conclude that OM contains knowledge. However, for the sake of simplicity, we will use the term knowledge to refer to OM and knowledge throughout this paper. Various knowledge taxonomies exist. Alavi and Leidner (2001) and Jennex and Croasdell (2005) found that the most commonly used taxonomy is Polyani’s (1964 and 1967) and Nonaka’s (1994) dimensions of tacit and explicit knowledge. This paper uses this taxonomy for knowledge. Tacit knowledge is that which is understood within a knower’s mind. It consists of cognitive and technical components. Cognitive components are the mental models used by the knower and which cannot be directly expressed by data or
knowledge representations. Technical components are concrete concepts that can be readily expressed. Explicit knowledge also consists of these technical components that can be directly expressed by knowledge representations. Knowledge management, KM, in an organization occurs when members of an organization pass tacit and explicit knowledge to each other. Information Technology, IT, assists KM by providing knowledge repositories and methods for capturing and retrieving knowledge. The extent of the dimension of the knowledge being captured limits the effectiveness of IT in assisting KM. IT works best with knowledge that is primarily in the explicit dimension. Knowledge that is primarily in the tacit dimension requires that more context be captured with the knowledge where context is the information used to explain what the knowledge means and how it is used. Managing tacit knowledge is more difficult to support using IT solutions. Jennex (2005) looked at what KM is and found no consensus definition. However, using the review board of the International Journal of Knowledge Management as an expert panel and soliciting definitions of KM that were used by the board members, the following working definition is used to define KM for this paper: “KM is the practice of selectively applying knowledge from previous experiences of decision making to current and future decision making activities with the express purpose of improving the organization’s effectiveness….(Jennex, 2005)” KM is an action discipline; knowledge needs to be used and applied for KM to have an impact. We also need measurable impacts from knowledge reuse for KM to be successful. Decision making is something that can be measured and judged. Organizations can tell if they are making the same decisions over and over and if they are using past knowledge to make these decisions quicker and better. Also, decision making is the ultimate application of knowledge. This working defini-
tion provides this direction for KM and leads to a description of success for KM as being able to provide the appropriate knowledge for decision making when it is needed to those that need it.
Knowledge Management Systems

Alavi and Leidner (2001, p. 114) defined a KMS as "IT (Information Technology)-based systems developed to support and enhance the organizational processes of knowledge creation, storage/retrieval, transfer, and application." They observed that not all KM initiatives will implement an IT solution, but they support IT as an enabler of KM. Maier (2002) expanded on the IT concept for the KMS by calling it an ICT (Information and Communication Technology) system that supported the functions of knowledge creation, construction, identification, capturing, acquisition, selection, valuation, organization, linking, structuring, formalization, visualization, distribution, retention, maintenance, refinement, evolution, accessing, search, and application. Stein and Zwass (1995) define an Organizational Memory Information System (OMS) as the processes and IT components necessary to capture, store, and apply knowledge created in the past on decisions currently being made. Jennex and Olfman (2004) expanded this definition by incorporating the OMS into the KMS and adding strategy and service components to the KMS. We expand the boundaries of a KMS by taking a Churchman view of a system. Churchman (1979, p. 29) defines a system as "a set of parts coordinated to accomplish a set of goals" and notes that there are five basic considerations for determining the meaning of a system:
• System objectives, including performance measures
• System environment
• System resources
• System components, their activities, goals, and measures of performance
• System management.
Churchman (1979) also noted that systems are always part of a larger system and that the environment surrounding the system is outside the system's control, but influences how the system performs. The final view of a KMS is as a system that includes IT/ICT components, repositories, users, processes that use and/or generate knowledge, knowledge, knowledge use culture, and the KM initiative with its associated goals and measures. This final definition is important as it makes the KMS an embodiment of the KM initiative and makes it possible to associate KM success with KMS success.
KM Success

The above paragraphs define KM success as reusing knowledge to improve organizational effectiveness by providing the appropriate knowledge to those that need it when it is needed. KM is expected to have a positive impact on the organization that improves organizational effectiveness. DeLone and McLean (1992, 2003) use the terms success and effectiveness interchangeably. This paper uses KM success and KM effectiveness interchangeably by implying that increasing decision making effectiveness has a positive impact on the organization resulting in successful KM. KM and KMS success are also used interchangeably. KMS success can be defined as making KMS components more effective by improving search speed, accuracy, etc. As an example, a KMS that enhances search and retrieval functions enhances decision making effectiveness by improving the ability of the decision maker to find and retrieve appropriate knowledge in a more timely manner. The implication is that by increasing KMS effectiveness, KMS success is enhanced and decision making capability is enhanced, leading to positive impacts on the organization. This is how KM success is defined and it is concluded that enhancing KMS effectiveness makes the KMS more successful as well as being a reflection of KM success.
DeLone and McLean IS Success Model

In 1992 DeLone and McLean published their seminal work proposing a taxonomy and interactive model for conceptualizing and operationalizing IS success (DeLone and McLean, 1992). The DeLone and McLean (D&M) IS Success Model is based on a review and integration of 180 research studies that used some form of system success as a dependent variable. The model identifies six interrelated dimensions of success as shown in Figure 1. Each dimension can have measures for determining its impact on success and on the other dimensions. Jennex et al. (1998) adopted the generic framework of the D&M IS Success Model and customized the dimensions to reflect the System Quality and Use constructs needed for an organizational memory information system, OMS. Jennex and Olfman (2002) expanded this OMS Success Model to include constructs for Information Quality. DeLone and McLean (2003) revisited the D&M IS Success Model by incorporating subsequent IS success research and addressing criticisms of the original model. A total of 144 articles from refereed journals and 15 papers from the International Conference on Information Systems, ICIS, citing the D&M IS Success Model were reviewed, with 14 of these articles reporting on studies that attempted to empirically investigate the model. The result of the article is the modified D&M IS Success Model shown in Figure 2. Major changes include the addition of a Service Quality dimension for the service provided by the IS group, the modification of the Use dimension into an Intent to Use dimension, the combination of the Individual and Organizational Impact dimensions into an overall Net Benefits dimension, and the addition of a feedback loop from Net Benefits to Intent to Use and User Satisfaction. This paper modifies the Jennex and Olfman OMS Success Model into a KM Success Model by applying KM research and the modified D&M IS Success Model.
Figure 1. DeLone and McLean’s IS Success Model (1992)
Figure 2. DeLone and McLean’s Revisited IS Success Model (2003)
KM Success Model

The model developed in this paper was initially proposed by Jennex et al. (1998) after an ethnographic case study of KM in an engineering organization. The model was modified by Jennex and Olfman (2002) following a five-year longitudinal study of knowledge management in an engineering organization and is based on the DeLone and McLean (2003) revised IS Success Model. This final model was developed to incorporate experience in using the model to design KMS and other KM/KMS success factor research from the literature. Figure 3 shows the KM Success Model. The KM Success Model is based on DeLone and McLean (2003). Since the KM Success Model is assessing the use of orga-
nizational knowledge, the Information Quality dimension is renamed the Knowledge Quality dimension. Also, because use of a KMS is usually voluntary, the KM Success Model expanded the Intention to Use dimension to include a Perceived Benefit dimension based on Thompson, Higgins, and Howell’s (1991) Perceived Benefit model used to predict system usage when usage is voluntary. Finally, since KM strategy/process is key to having the right knowledge, the feedback loop is extended back to this dimension. Dimension descriptions of the model follow.
System Quality

Jennex and Olfman (2000, 2002) found infrastructure issues such as using a common network
Figure 3. KM Success Model
structure, adding KM skills to the technology support skill set, using high-end personal computers and integrated databases, and standardizing hardware and software across the organization to be key to building KM. The System Quality dimension incorporates these findings and defines system quality by how well KM performs the functions of knowledge creation, storage/retrieval, transfer, and application; how much of the knowledge is represented in the computerized portion of the OM; and the KM infrastructure. Three constructs are identified: the technological resources of the organization, KM form, and KM level. Technological resources define the capability of an organization
to develop, operate, and maintain KM. These include aspects such as amount of experience available for developing and maintaining KM, the type of hardware, networks, interfaces, and databases used to hold and manipulate knowledge, capacities and speeds associated with KM infrastructure, and the competence of the users to use KM tools. Technical resources enable the KM form and KM level constructs. KM form refers to the extent to which the knowledge and KM processes are computerized and integrated. This includes how much of the accessible knowledge is on line and available through a single interface and how integrated
the processes of knowledge creation, storage/retrieval, transfer, and application are, and how well they are automated and embedded in routine organizational processes. This construct incorporates concerns from the integrative and adaptive effectiveness clusters proposed for KMS effectiveness use by Stein and Zwass (1995). This construct along with the technological resources construct influences the KM level construct. KM level refers to the ability to bring knowledge to bear upon current activities. This refers explicitly to the KM mnemonic functions such as search, retrieval, manipulation, and abstraction; and how well they are implemented. The technological resources and form of the KMS influence this construct in that the stronger the technical resources and the more integrated and computerized knowledge is, the more important this construct is and the more effective it can be. Additional support for these constructs comes from Alavi and Leidner (1999) who found it important to have an integrated and integrative technology architecture that supports database, communication, and search and retrieval functions. Davenport et al. (1998) who found technical infrastructure to be crucial to effective KM. Ginsberg and Kambil (1999) who found knowledge representation, storage, search, retrieval, visualization, and quality control to be key technical issues. Mandviwalla et al. (1998) who described technical issues affecting KMS design, including knowledge storage/repository considerations, how information and knowledge is organized so that it can be searched and linked to appropriate events and use, processes for integrating the various repositories and for re-integrating information and knowledge extracted from specific events, and access locations as users rarely access the KMS from a single location (leading to network needs and security concerns). Sage and Rouse (1999) who identified infrastructure for capturing, searching, retrieving, and displaying knowledge and an understood enterprise knowledge structure as important. Finally, several of the KMS classifi-
cations focus on KM support tools, architecture, or life cycle, all requiring strong system quality. Ultimately, given the effectiveness of information technology to provide search, storage, retrieval, and visualization capabilities rapidly, it is expected that a more fully computerized system utilizing network, semantic web, and data warehouse technologies will result in the highest levels of system quality.
Knowledge Quality

Jennex and Olfman (2000, 2002) identified having a KM process and an enterprise-wide knowledge infrastructure, incorporating KM processes into regular work practices, and recognizing that knowledge needs differ for users at different levels as key issues in determining and implementing the right knowledge for KM to capture. Additionally, it was found that KM users have formal and/or informal drivers that guide them in selecting information and knowledge to be retained by KM, and formal and informal processes for reviewing and modifying stored information and knowledge. The Knowledge Quality dimension incorporates this and ensures that the right knowledge with sufficient context is captured and available for the right users at the right time. Three constructs are identified: the KM strategy/process, knowledge richness, and linkages between knowledge components. The KM strategy/process construct looks at the organizational processes for identifying knowledge users and knowledge for capture and reuse, the formality of these processes including process planning, and the format and context of the knowledge to be stored. This construct determines the contents and effectiveness of the other two constructs. Richness reflects the accuracy and timeliness of the stored knowledge as well as having sufficient knowledge context and cultural context to make the knowledge useful. Linkages reflect the knowledge and topic maps and/or listings of expertise available to identify sources of knowledge to users in the organization.
Hansen et al. (1999) describe two types of knowledge strategy, personification and codification, and warn against trying to follow both strategies equally at the same time. These strategies refer to how knowledge is captured, represented, retrieved, and used. However, KM strategy/process also needs to reflect that the knowledge needs of the users change over time as found by the longitudinal study (Jennex and Olfman, 2002) and that new users have a hard time understanding codified tacit knowledge (Koskinen, 2001). As an example, new users will follow personification until they understand the context in which knowledge is captured and used, and then they are willing to switch to a codification strategy. Personification corresponds to "linkages" in the model shown in Figure 3, and refers to the situation where new users initially feel more comfortable seeking knowledge contexts from recognized human experts on a particular subject. Following this phase, these users tend to switch to codified knowledge; thus, codification corresponds to "richness" in the model. Additionally, Brown et al. (2006) found that as the procedural complexity and teachability of knowledge increased, the tendency of users to rely on linkages (person-to-person knowledge transfer) also increased. Jennex (2006) discusses the impact of context and culture on knowledge reuse and the conclusion is that as knowledge complexity grows, the ability to capture the context and culture information needed to ensure the knowledge is usable and used correctly becomes more difficult and the richness of the stored knowledge is less able to meet this need, resulting in users shifting to using linkages and personification. This model disagrees with Hansen et al.'s (1999) finding that organizations need to select a single strategy to concentrate on while using the other strategy in a support role by recognizing that both strategies will exist and that they may be equal in importance. Additional support for these constructs comes from Barna (2003) who identified creating a standard knowledge submission process,
methodologies and processes for the codification, documentation, and storage of knowledge, processes for capturing and converting individual tacit knowledge into organizational knowledge as important. Cross and Baird (2000) who found that for KM to improve business performance it had to increase organizational learning by supporting personal relationships between experts and knowledge users, providing distributed databases to store knowledge and pointers to knowledge, providing work processes for users to convert personal experience into organizational learning, and providing direction to what knowledge the organization needs to capture and learn from. Davenport, et al. (1998) who identified three key success factors for KM strategy/process as clearly communicated purpose/goals, multiple channels for knowledge transfer, and a standard, flexible knowledge structure. Mandviwalla, et al. (1998) who described several strategy issues affecting KM design. These include the KM focus (who are the users), the quantity of knowledge to be captured and in what formats; who filters what is captured, what reliance and/or limitations are placed on the use of individual memories, how long the knowledge is useful, and the work activities and processes that utilize KM. Sage and Rouse (1999) who identified modeling processes to identify knowledge needs and sources, KM strategy for the identification of knowledge to capture and use and who will use it, an understood enterprise knowledge structure, and clear KM goals as important.
Service Quality

The Service Quality dimension ensures that KM has adequate support for users to utilize KM effectively. Three constructs, management support, user KM service quality, and IS KM service quality, are identified. Management support refers to the direction and support an organization needs to provide to ensure that adequate resources are allocated to the creation and maintenance of KM,
that a knowledge-sharing and using organizational culture is developed, that encouragement, incentives, and direction are provided to the workforce to encourage KM use, knowledge reuse, and knowledge sharing, and that sufficient control structures are created in the organization to monitor knowledge and KM use. This construct enables the other two constructs. User KM service quality refers to the support provided by user organizations to help their personnel utilize KM. This support consists of providing training to their users on how to use KM, how to query KM, and guidance and support for making knowledge capture, knowledge reuse, and KM use part of routine business processes. IS KM service quality refers to the support provided by the IS organization to KM users and to maintaining KM. This support consists of building and maintaining KM tools and infrastructure, maintaining the knowledge base, building and providing knowledge maps of the databases, and ensuring the reliability, security, and availability of KM. Our previous KM success model versions included the above constructs as part of the system quality and knowledge quality dimensions. These constructs were extracted from these dimensions in order to generate the constructs for the service quality dimension and to ensure the final KM success model was consistent with DeLone and McLean (2003). Additional support for these constructs comes from Alavi and Leidner (1999) who found organizational and cultural issues associated with user motivation to share and use knowledge to be the most significant. Barna (2003) who identified the main managerial success factor as creating and promoting a culture of knowledge sharing within the organization by articulating a corporate KM vision, rewarding employees for knowledge sharing and creating communities of practice. Other managerial success factors include obtaining senior management support, creating a learning organization, providing KM training, precisely defining KM project objectives, and creating
relevant and easily accessible knowledge-sharing databases and knowledge maps. Cross and Baird (2000) found that for KM to improve business performance it had to increase organizational learning by supporting personal relationships between experts and knowledge users and by providing incentives to motivate users to learn from experience and to use KM. Davenport, et al. (1998) found senior management support, motivational incentives for KM users, and a knowledge-friendly culture to be critical issues. Ginsberg and Kambil (1999) found incentives to share and use knowledge to be the key organizational issues. Holsapple and Joshi (2000) found leadership and top management commitment/support to be crucial; resource influences such as having sufficient financial support and the skill level of employees were also important. Malhotra and Galletta (2003) identified the critical importance of user commitment and motivation but found that using incentives did not guarantee a successful KMS. Sage and Rouse (1999) identified incentives and motivation to use KM, clear KM goals, and measuring and evaluating the effectiveness of KM as important. Yu, et al. (2004) determined that KM drivers such as a learning culture, knowledge sharing intention, rewards, and KM team activity significantly affected KM performance.
User Satisfaction

The User Satisfaction dimension is a construct that measures users' satisfaction with KM. It is considered a good complementary measure of KM use, as the desire to use KM depends on users being satisfied with KM. User satisfaction is considered a better measure for this dimension than actual KM use, as KM may not be used constantly yet still be considered effective. Jennex (2005) found that some KM repositories or knowledge processes, such as email, may be used daily while others may be used once a year or less. However, it was also found that the importance of the
once-a-year use might be greater than that of the daily use. This makes actual use a weak measure for this dimension, given that the amount of actual use may have little impact on KM success as long as KM is used when appropriate, and supports DeLone and McLean (2003) in dropping amount of use as a measure of success.
Intent to Use/Perceived Benefit

The Intent to Use/Perceived Benefit dimension is a construct that measures users' perceptions of the benefits of KM. It is good for predicting continued KM use when KM use is voluntary and when the amount and/or effectiveness of KM use depends on meeting current and future user needs. Jennex and Olfman (2002) used a perceived benefit instrument adapted from Thompson, Higgins, and Howell (1991) to measure user satisfaction and predict continued intent to use KM when KM use was voluntary. Thompson, Higgins, and Howell's (1991) perceived benefit model utilizes Triandis' (1980) theory that perceptions of future consequences predict future actions. This construct adapts the model to measure the relationships between social factors concerning knowledge use, perceived KM complexity, perceived near-term job fit and benefits of knowledge use, perceived long-term benefits of knowledge use, and fear of job loss with respect to willingness to contribute knowledge. Malhotra and Galletta (2003) created an instrument for measuring user commitment and motivation that is similar to Thompson, Higgins, and Howell's (1991) perceived benefit model but is based on self-determination theory and the Perceived Locus of Causality; this instrument may also be useful for predicting intent to use. Additionally, Yu, et al. (2004) found that KM drivers such as knowledge sharing intention significantly affected KM performance.
Net Impact

An individual's use of KM will produce an impact on that person's performance in the workplace. In addition, DeLone and McLean (1992) note that an individual 'impact' could also be an indication that an information system has given the user a better understanding of the decision context, has improved his or her decision-making productivity, has produced a change in user activity, or has changed the decision maker's perception of the importance or usefulness of the information system. Each individual impact should have an effect on the performance of the whole organization. However, organizational impacts usually are not the summation of individual impacts, so the association between individual and organizational impacts is often difficult to draw. DeLone and McLean (2003) recognized this difficulty and combined all impacts into a single dimension. Davenport, et al. (1998) overcame this difficulty by looking for the establishment of linkages to economic performance. Alavi and Leidner (1999) also found it important to measure the benefits of KM, as did Jennex and Olfman (2000). We agree with combining all impacts into one dimension and with the addition of the feedback loop to the User Satisfaction and Intent to Use/Perceived Benefit dimensions, but take it a step further and extend the feedback loop to include the KM Strategy/Process construct. Jennex and Olfman (2002) showed this feedback in their model relating KM, OM, organizational learning, and effectiveness, shown in Figure 4. This model recognizes that the use of knowledge may have good or bad benefits. It is feedback from these benefits that drives the organization either to use more of the same type of knowledge or to forget the knowledge; this feedback also provides users with information on the benefit of the KMS. Alavi and Leidner (2001) also agree that KM should allow for the forgetting of some knowledge when it provides no benefit or has detrimental effects. To ensure this is done,
feedback on the value of stored knowledge needs to be fed into the KM Strategy/Process construct.

Figure 4. The OM/KM Model
Operationalization of the Success Model

Jennex and Olfman (2002) performed a longitudinal study of KM in an engineering organization that identified a link between knowledge use and improved organizational effectiveness. Although a great deal of quantitative data was collected, it was not possible to quantify productivity gains as a function of knowledge use. KM was found to be effective and to have improved in effectiveness over a five-year period. Additionally, the engineers were found to be more productive. Jennex (2000) applied an early version of this model to the construction and implementation of a knowledge management website for assisting a virtual project team. It was found that applying the model to the design of the site resulted in the project going from a lagging project to a leading project in just a few months.
Hatami et al. (2003) used the KM Success Model to analyze knowledge reuse and the effectiveness of decision-making. They found the model useful in explaining the effects of culture and knowledge needs on overall KM success. Jennex, Olfman, and Addo (2003) investigated the need for having an organizational KM strategy to ensure that knowledge benefits gained from projects are captured for use in the organization. They found that benefits from Y2K projects were not being captured because the parent organizations did not have a KM strategy/process. Their conclusion was that KM can exist in projects and can assist projects in utilizing knowledge during the project; however, the parent organization will not benefit from project-based KM unless the organization has an overall KM strategy/process. The following discussion combines these studies to provide methods of operationalizing the constructs proposed previously. Table 1 summarizes the various measures applied in these studies.
Table 1. KMS success model data collection methods (Construct: Data Collection Method)

Technical Resources: User competency survey; observation and document research of IS capabilities; interview with IS Manager on infrastructure.
Form of KMS: Interviews and survey of knowledge sources and form.
Level of KMS: Survey of satisfaction with retrieval times; usability testing of KMS functions.
KM Strategy/Process: Survey on drivers for putting knowledge into the KMS and on satisfaction with the knowledge in the KMS; check on whether a formal strategy/process exists.
Richness: Usability test on adequacy of stored knowledge and associated context; interviews and satisfaction survey on adequacy of knowledge in the KMS.
Linkages: Usability test on adequacy of stored linkages; interviews and satisfaction surveys on satisfaction with linkages stored in the KMS.
Management Support: Interviews and the Social Factors construct of Thompson, Higgins, and Howell's survey on perceived benefit.
IS KM Service Quality: Interview with IS Manager on IS capabilities; interviews with users on needs and capabilities; suggest adding a user satisfaction survey on service issues.
User Organization KM Service Quality: Interview with the user organization KM team on capabilities, responsibilities, and needs from IS; interviews with users on needs and capabilities; suggest adding a user satisfaction survey on service issues.
User Satisfaction: Doll and Torkzadeh (1988) End User Satisfaction Measure, or any other user satisfaction measure.
Intent to Use/Perceived Benefit: Thompson, Higgins, and Howell's (1991) survey on perceived benefit.
Net Impacts: Individual and organizational productivity models determined through interviews and observation; tend to be specific to organizations.
System Quality

Three constructs were proposed for the system quality dimension: technical resources, KM form, and KM level. Jennex and Olfman (2002) found that the capabilities of the IS organization and the users can impact the success of KM. IS infrastructure and organizational capabilities that enhanced KM effectiveness included a fast, high-capacity infrastructure, strong application development skills, network skills, and awareness of the user organization's knowledge requirements. Users' capabilities that enhanced KM effectiveness included a high degree of computer literacy and high-end personal computers. Given the importance of these technical resources, operationalization of the technical resources construct can be accomplished by focusing on the overall experience of the development group in building and maintaining networked systems that support
KM, the computer capabilities of KM end-users, and the quality of hardware, network, application, and operating system capabilities of workstations supporting KM. KM level was defined as the ability to bring past information to bear upon current activities. This can be measured in terms of Stein and Zwass's (1995) mnemonic functions, including knowledge acquisition, retention, maintenance, search, and retrieval. It is expected that more effective KM will include more sophisticated levels of these functions; for example, more sophisticated KM should contain the ability to do filtering, to support guided exploration, and to grow memory. Usability testing of these functions can serve as a measure of how effectively they are implemented. KM form refers to the extent to which knowledge is computerized and integrated. In essence, the more computerized the memory (personification and codification approaches), the more
integrated it can be. That is, if all knowledge sources are available in computer-based form, then it will be possible to search and retrieve knowledge more effectively. Integration also speaks to the external consistency of the various KM tools. Jennex and Olfman (2002) found that although much of the KM used by the engineering organization was computerized, there were many different KMS components, each with varying kinds of storage mechanisms and interfaces. These components were poorly integrated, relying mainly on the copy and paste features of the Windows interface, and therefore limited the ability of workers to utilize KM effectively. It was evident that more sophisticated technical resources could produce a more integrated set of components. Surveys of actual knowledge repositories used for KM can determine how much knowledge is stored in computerized form. It is desirable, but not practical, to have all knowledge in a computer; assessment of this construct should therefore focus on how much of the knowledge that is practical for computer storage is actually computerized.
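To make the operationalization of the KM level construct concrete, the following is a minimal Python sketch of how usability-test ratings of Stein and Zwass's mnemonic functions could be condensed into a single score. The 1-5 rating scale, the threshold for "sophisticated" support, and the example data are assumptions for illustration only; they are not measures prescribed by Stein and Zwass (1995) or by the studies cited here.

```python
# Hypothetical scoring of the KM level construct from usability-test ratings of
# Stein and Zwass's mnemonic functions; the 1-5 scale and cutoff are illustrative.

MNEMONIC_FUNCTIONS = ["acquisition", "retention", "maintenance", "search", "retrieval"]

def km_level_score(ratings):
    """Average 1-5 usability ratings over the mnemonic functions (missing = 1)."""
    return sum(ratings.get(f, 1.0) for f in MNEMONIC_FUNCTIONS) / len(MNEMONIC_FUNCTIONS)

# Example ratings from a usability test of one KMS component (made-up data).
observed = {"acquisition": 4, "retention": 3, "maintenance": 2, "search": 5, "retrieval": 4}
score = km_level_score(observed)
print(f"KM level score: {score:.1f} / 5")
if score >= 4.0:
    print("Likely supports sophisticated use such as filtering and guided exploration.")
```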
Knowledge Quality

Knowledge quality has three constructs: KM strategy/process, richness, and linkages. Jennex and Olfman (2002) used surveys of users to determine drivers for putting knowledge into KM repositories and user satisfaction with the knowledge that was in these repositories. Jennex, Olfman, and Addo (2003) surveyed organizations to determine whether they had a KM strategy and how formal it was. Jennex and Olfman (2002) used interviews of KM users to determine their satisfaction with the accuracy, timeliness, and adequacy of available knowledge. The need for linkages and personification of knowledge was found through interviews with users on where they went to retrieve knowledge. Additionally, it was found that users' KM needs vary depending on their experience level in the organization. Context of the knowledge is critical. New members did not have this context, and the
knowledge repositories did not store sufficient context for a new member to understand and use the stored knowledge. It was found that new members need linkages to the human sources of knowledge. It is not expected that KM will ever be able to do an adequate job of storing context, so it is recommended that KM store linkages to knowledge.
Service Quality

Service quality was defined previously as how well the organization supports KM. Three constructs are proposed: management support, IS KM service quality, and user KM service quality. Jennex and Olfman (2002) identified these constructs through interviews that found evidence that the service quality of the IS and user organizations can impact KM success and that service quality was determined by the organizations possessing certain capabilities. IS KM service consisted of IS being able to build and maintain KM components and to map the knowledge base. IS organizational capabilities that enhanced this service effectiveness included data integration skills, knowledge representation skills, and awareness of the user organization's knowledge requirements. User organization KM service consisted of incorporating knowledge capture into work processes and being able to identify key knowledge requirements. User organization KM capabilities that enhanced this service effectiveness included understanding and being able to implement KM techniques such as knowledge taxonomies, ontologies, and knowledge maps, and process analysis capabilities. Additionally, service was enhanced by either the IS or the user organization providing training on how to construct knowledge searches, where the knowledge was located, and how to use KM. The key construct, management support, was measured using interviews and the social factors measure of Thompson, Higgins, and Howell's survey on perceived benefit. The social factors measure uses a Likert-scale survey to determine
perceptions of support from peers, supervisors, and managers, and gives a good view of the organizational culture's ability to support KM and of management support for doing KM. Additionally, individual and organizational productivity models were generated using interviews with managers; these models provide an assessment of the impact of knowledge use on individuals and organizations and of what incentives are being used to encourage KM participation. IS organization KM support was measured by determining the overall experience of the development group in building and maintaining networked systems that support KM and the satisfaction of the KM end-users with this support. User organization KM support was measured by determining what support was provided and how satisfied the users were with it. Measures assessing specific areas of capability can be used should less-than-acceptable service satisfaction be found.
User Satisfaction

User satisfaction is a construct that measures users' perceptions of KM. This is one of the most frequently measured aspects of IS success, and it is also a construct with a multitude of measurement instruments. User satisfaction can relate to both product and service. As noted above, product satisfaction is often used to measure knowledge quality. Product satisfaction can be measured using the 12-item instrument developed by Doll and Torkzadeh (1988). This measure addresses satisfaction with content, accuracy, format, ease of use, and timeliness. Additionally, measures addressing satisfaction with interfaces should be used. Other user satisfaction measures can be used to assess the specific quality constructs as discussed in previous paragraphs.
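As an illustration of how such a satisfaction instrument could be scored, the sketch below averages hypothetical Likert responses into the five dimensions named above and into an overall score. The item identifiers, the item-to-dimension grouping, and the responses are invented for illustration; this is not the actual Doll and Torkzadeh (1988) 12-item instrument or its published scoring.

```python
# Illustrative scoring of an end-user satisfaction survey grouped into the five
# dimensions discussed above. Item names, groupings, and responses are hypothetical.

DIMENSIONS = {
    "content":     ["c1", "c2", "c3", "c4"],
    "accuracy":    ["a1", "a2"],
    "format":      ["f1", "f2"],
    "ease_of_use": ["e1", "e2"],
    "timeliness":  ["t1", "t2"],
}

def score_satisfaction(responses):
    """Return per-dimension means and an overall mean from 1-5 Likert responses."""
    dim_scores = {
        dim: sum(responses[item] for item in items) / len(items)
        for dim, items in DIMENSIONS.items()
    }
    overall = sum(dim_scores.values()) / len(dim_scores)
    return dim_scores, overall

# One respondent's (made-up) answers.
answers = {"c1": 4, "c2": 5, "c3": 4, "c4": 3, "a1": 4, "a2": 4,
           "f1": 3, "f2": 4, "e1": 5, "e2": 4, "t1": 3, "t2": 4}
dims, overall = score_satisfaction(answers)
print(dims)
print(f"Overall satisfaction: {overall:.2f} / 5")
```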
Intent to Use/Perceived Benefit

Jennex, et al. (1998) used Thompson, Higgins, and Howell's (1991) Perceived Benefit Model
to predict continued voluntary usage of KM by the engineering organization. Four factors from the model plus one added by Jennex and Olfman were in the survey:

• Job fit of KM, near-term consequences of using KM
• Job fit of KM, long-term consequences of using KM
• Social factors in support of using KM
• Complexity of KM tools and processes
• Fear of job loss for contributing knowledge to KM
All five factors were found to support continued KM use during the initial measurements. Jennex and Olfman (2002) found continued KM use throughout the five years of observing KM usage and concluded that the Perceived Benefit model was useful for predicting continued use. Jennex (2000) used these factors to design the site, work processes, and management processes for a virtual project team using web-based KM to perform a utility Year 2000 project. Promoting the social factors and providing near-term job fit were critical in ensuring the virtual project team utilized KM. KM use was considered highly successful, as the project went from performing in the bottom third of utility projects to performing in the top third of all utility projects.
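As a rough illustration of how the five perceived-benefit factors listed above could be related to continued use, the sketch below fits an ordinary least-squares model linking invented factor scores to an invented intention-to-continue rating. The data and the simple linear form are illustrative assumptions; the studies cited here used the Thompson, Higgins, and Howell (1991) survey instrument, not this code.

```python
import numpy as np

# Columns: near-term job fit, long-term job fit, social factors, complexity,
# fear of job loss (all 1-5 factor scores). Rows are hypothetical respondents.
X = np.array([
    [4.0, 3.5, 4.0, 2.0, 1.5],
    [3.0, 3.0, 3.5, 3.0, 2.0],
    [4.5, 4.0, 4.5, 1.5, 1.0],
    [2.5, 2.0, 3.0, 3.5, 2.5],
    [3.5, 3.0, 4.0, 2.5, 1.5],
    [2.0, 2.5, 2.5, 4.0, 3.0],
    [4.0, 4.5, 3.5, 2.0, 1.0],
    [3.0, 3.5, 3.0, 3.0, 2.0],
])
# Reported intention to continue using KM (1-5), also hypothetical.
y = np.array([4.5, 3.5, 5.0, 2.5, 4.0, 2.0, 4.5, 3.5])

# Add an intercept column and fit ordinary least squares.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
print("Intercept and factor weights:", np.round(coef, 2))

# Predict intention for a new (hypothetical) user profile: intercept + 5 factor scores.
new_user = np.array([1.0, 4.0, 3.0, 4.5, 2.0, 1.0])
print("Predicted intention to continue:", round(float(new_user @ coef), 2))
```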
Net Benefits

The net benefits dimension looks for any benefits attributed to use of the KMS. We attempted to measure benefits associated with individual and organizational use of KM through the generation of productivity models that identified where knowledge use impacted productivity. KM benefits for individuals are found in their work processes. Jennex and Olfman (2002) queried supervisors and managers to determine what they believed was the nature of individual productivity in the context of the station-engineering work
process. The interviews revealed a complex set of factors. Those benefiting from KM include:

• Timeliness in completing assignments and doing them right the first time
• Number of assignments completed
• Identification and completion of high-priority assignments
• Completeness of solutions
• Quality of solutions (thoroughness and accuracy)
• Complexity of the work that can be assigned to an engineer
• Client satisfaction
While many of these factors are measured quantitatively, it was not possible to directly attribute changes in performance solely to KM use, although improvements in performance were qualitatively attributed to KM use. Additionally, Jennex and Olfman (2002) asked 20 engineers to indicate whether they were more productive now than 5 or 10 years ago, and all but one thought they were. This improvement was primarily attributed to KM use but was also a qualitative assessment. Organizational impacts relate to the effectiveness of the organization as a whole. For a nuclear power plant, specific measures of effectiveness were available. These measures relate to assessments performed by external organizations, as well as those performed internally. External assessments were found to be the most influenced by KM use. Jennex and Olfman (2002) found measures such as the Systematic Assessment of Licensee Performance (SALP) reports issued by the Nuclear Regulatory Commission and site evaluations performed by the Institute of Nuclear Power Operations (INPO). Review of SALP scores issued since 1988 showed an increase from a rating of 2 to a rating of 1 in 1996. This rating was maintained through the 5 years of the study. An INPO evaluation was conducted during the spring of 1996 and resulted in a 1 rating. This rating
was also maintained throughout the 5 years of the study. These assessments identified several strengths directly related to engineer productivity using KM. These include decision-making, root cause analysis, problem resolution, timeliness, and Operability Assessment documentation. This demonstrates a direct link between engineer productivity and organization productivity. Also, since organization productivity is rated highly, it can be inferred that engineer productivity is high. Two internal indicators were linked to KM use: unit capacity and unplanned automatic scrams. Unit capacity and unplanned scrams are influenced by how well the engineers evaluate and correct problems. Both indicators improved over time. These two indicators, plus unplanned outages and the duration of outages, became the standard measures during the Jennex and Olfman (2002) study, and reporting and monitoring of these factors significantly improved during the study. The conclusion is that net benefits should be measured using measures that are specific to the organization and are influenced by the use of KM. Suitable measures were found in all the studies used for this paper, and it is believed they can be found for any organization.
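A minimal sketch of how organization-specific net-benefit indicators such as those above might be tracked over time is given below. The yearly values are invented solely to illustrate the computation of simple trends and are not data from the Jennex and Olfman (2002) study.

```python
# Track hypothetical organization-specific net-benefit indicators over time and
# report their direction of change. All values are invented for illustration.

years = [1996, 1997, 1998, 1999, 2000]
indicators = {
    "unit_capacity_pct":     [78.0, 81.5, 84.0, 86.5, 88.0],  # higher is better
    "unplanned_auto_scrams": [3, 2, 2, 1, 0],                  # lower is better
    "unplanned_outages":     [4, 3, 3, 2, 2],                  # lower is better
}
higher_is_better = {"unit_capacity_pct"}

for name, values in indicators.items():
    change = values[-1] - values[0]
    improved = change > 0 if name in higher_is_better else change < 0
    trend = "improved" if improved else "did not improve"
    print(f"{name}: {values[0]} -> {values[-1]} ({trend} over {years[0]}-{years[-1]})")
```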
Conclusion

The DeLone and McLean IS Success Model is a generally accepted model for assessing the success of an IS. Adapting the model to KM is a viable approach to assessing KM success. The model presented in this paper meets the spirit and intent of DeLone and McLean (1992, 2003). Additionally, Jennex (2000) used an earlier version of the KM Success Model to design, build, and implement Intranet-based KM that was found to be very effective and successful. The conclusion of this paper is that the KM Success Model is a useful model for predicting KM success. It is also useful for designing effective KM.
Areas for Future Research

DeLone & McLean (1992, pp. 87-88) stated that "Researchers should systematically combine individual measures from the IS success categories to create a comprehensive measurement instrument". This is the major area for future KM success research. Jennex and Olfman (2002) provided a basis for exploring a quantitative analysis and test of the KM Success Model. To extend this work, it is suggested that a survey instrument to assess the effectiveness of KM within other nuclear power plant engineering organizations in the United States be developed and administered. Since these organizations have similar characteristics and goals, they provide an opportunity to gain a homogeneous set of data to use for testing the model and to ultimately generate a generic set of KM success measures. Additionally, other measures need to be assessed for applicability to the model. In particular, the Technology Acceptance Model's Perceived Usefulness construct (Davis, 1989) should be investigated as a possible measure for Intent to Use/Perceived Benefit.
References

Alavi, M., & Leidner, D. E. (2001). Review: Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues. Management Information Systems Quarterly, 25(1), 107–136. doi:10.2307/3250961

Barna, Z. (2003). Knowledge Management: A Critical E-Business Strategic Factor (Unpublished Master's Thesis). San Diego State University.

Brown, S. A., Dennis, A. R., & Gant, D. B. (2006). Understanding the Factors Influencing the Value of Person-to-Person Knowledge Sharing. In Proceedings of the 39th Hawaii International Conference on System Sciences, IEEE Computer Society.

Churchman, C. W. (1979). The Systems Approach (revised and updated). New York: Dell Publishing.

Cross, R., & Baird, L. (2000). Technology Is Not Enough: Improving Performance by Building Organizational Memory. Sloan Management Review, 41(3), 41–54.

Davenport, T. H., DeLong, D. W., & Beers, M. C. (1998). Successful Knowledge Management Projects. Sloan Management Review, 39(2), 43–57.

Davenport, T. H., & Prusak, L. (1998). Working Knowledge. Boston, MA: Harvard Business School Press.

Davis, F. D. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. Management Information Systems Quarterly, 13, 319–340. doi:10.2307/249008

DeLone, W. H., & McLean, E. R. (1992). Information Systems Success: The Quest for the Dependent Variable. Information Systems Research, 3, 60–95. doi:10.1287/isre.3.1.60

DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean Model of Information Systems Success: A Ten-Year Update. Journal of Management Information Systems, 19(4), 9–30.

Doll, W. J., & Torkzadeh, G. (1988). The Measurement of End-User Computing Satisfaction. Management Information Systems Quarterly, 12, 259–275. doi:10.2307/248851

Ginsberg, M., & Kambil, A. (1999). Annotate: A Web-based Knowledge Management Support System for Document Collections. In Proceedings of the 32nd Hawaii International Conference on System Sciences, IEEE Computer Society Press.

Hansen, M. T., Nohria, N., & Tierney, T. (1999). What's Your Strategy for Managing Knowledge? Harvard Business Review, (March-April), 106–116.

Hatami, A., Galliers, R. D., & Huang, J. (2003). Exploring the Impacts of Knowledge (Re)Use and Organizational Memory on the Effectiveness of Strategic Decisions: A Longitudinal Case Study. In Proceedings of the 36th Hawaii International Conference on System Sciences, IEEE Computer Society.

Holsapple, C. W., & Joshi, K. D. (2000). An Investigation of Factors that Influence the Management of Knowledge in Organizations. The Journal of Strategic Information Systems, 9, 235–261. doi:10.1016/S0963-8687(00)00046-9

Huber, G. P., Davenport, T. H., & King, D. (1998). Some Perspectives on Organizational Memory. Unpublished Working Paper for the Task Force on Organizational Memory, F. Burstein, G. Huber, M. Mandviwalla, J. Morrison, & L. Olfman (Eds.). Presented at the 31st Annual Hawaii International Conference on System Sciences.

Jennex, M. E. (2000). Using an Intranet to Manage Knowledge for a Virtual Project Team. In Schwartz, D. G., Divitini, M., & Brasethvik, T. (Eds.), Internet-Based Organizational Memory and Knowledge Management. Idea Group Publishing.

Jennex, M. E. (2005). What is Knowledge Management? International Journal of Knowledge Management, 1(4), 1–5.

Jennex, M. E. (2006). Culture, Context, and Knowledge Management. International Journal of Knowledge Management, 2(2), 1–5.

Jennex, M. E., & Croasdell, D. (2005). Knowledge Management: Are We A Discipline? International Journal of Knowledge Management, 1(1), 1–5.

Jennex, M. E., Olfman, L., Pituma, P., & Yong-Tae, P. (1998). An Organizational Memory Information Systems Success Model: An Extension of DeLone and McLean's I/S Success Model. In Proceedings of the 31st Annual Hawaii International Conference on System Sciences, IEEE Computer Society.

Jennex, M. E., & Olfman, L. (2000). Development Recommendations for Knowledge Management/Organizational Memory Systems. In Proceedings of the Information Systems Development Conference.

Jennex, M. E., & Olfman, L. (2002). Organizational Memory/Knowledge Effects on Productivity, A Longitudinal Study. In Proceedings of the 35th Annual Hawaii International Conference on System Sciences, IEEE Computer Society.

Jennex, M. E., Olfman, L., & Addo, T. B. A. (2003). The Need for an Organizational Knowledge Management Strategy. In Proceedings of the 36th Hawaii International Conference on System Sciences, HICSS36, IEEE Computer Society.

Koskinen, K. U. (2001). Tacit Knowledge as a Promoter of Success in Technology Firms. In Proceedings of the 34th Hawaii International Conference on System Sciences, IEEE Computer Society.

Maier, R. (2002). Knowledge Management Systems: Information and Communication Technologies for Knowledge Management. Berlin: Springer-Verlag.

Malhotra, Y., & Galletta, D. (2003). Role of Commitment and Motivation as Antecedents of Knowledge Management Systems Implementation. In Proceedings of the 36th Hawaii International Conference on System Sciences, IEEE Computer Society.

Mandviwalla, M., Eulgem, S., Mould, C., & Rao, S. V. (1998). Organizational Memory Systems Design. Unpublished Working Paper for the Task Force on Organizational Memory, F. Burstein, G. Huber, M. Mandviwalla, J. Morrison, & L. Olfman (Eds.). Presented at the 31st Annual Hawaii International Conference on System Sciences.

Nonaka, I. (1994). A Dynamic Theory of Organizational Knowledge Creation. Organization Science, 5(1), 14–37. doi:10.1287/orsc.5.1.14

Polanyi, M. (1964). Personal Knowledge: Toward a Post-Critical Philosophy. New York: Harper Torch Books.

Polanyi, M. (1967). The Tacit Dimension. London: Routledge.

Sage, A. P., & Rouse, W. B. (1999). Information Systems Frontiers in Knowledge Management. Information Systems Frontiers, 1(3), 205–219. doi:10.1023/A:1010046210832

Stein, E. W., & Zwass, V. (1995). Actualizing Organizational Memory With Information Systems. Information Systems Research, 6(2), 85–117. doi:10.1287/isre.6.2.85

Thompson, R. L., Higgins, C. A., & Howell, J. M. (1991). Personal Computing: Toward a Conceptual Model of Utilization. Management Information Systems Quarterly, 125–143. doi:10.2307/249443

Triandis, H. C. (1980). Beliefs, Attitudes, and Values. Lincoln, NE: University of Nebraska Press.

Turban, E., & Aronson, J. E. (2001). Decision Support Systems and Intelligent Systems (6th ed.). Upper Saddle River, NJ: Prentice Hall.

Walsh, J. P., & Ungson, G. R. (1991). Organizational Memory. Academy of Management Review, 16(1), 57–91. doi:10.2307/258607

Yu, S.-H., Kim, Y.-G., & Kim, M.-Y. (2004). Linking Organizational Knowledge Management Drivers to Knowledge Management Performance: An Exploratory Study. In Proceedings of the 37th Hawaii International Conference on System Sciences, IEEE Computer Society.
Chapter 3
Market Knowledge Management, Innovation and Product Performance: Survey in Medium and Large Brazilian Industrial Firms

Cid Gonçalves Filho, FUMEC University, Brazil
Rodrigo Baroni de Carvalho, FUMEC University, Brazil
George Leal Jamil, FUMEC University, Brazil
Abstract

In a business environment characterized by a high level of competitiveness, the impact of new products on an organization's revenue is an important factor. This research was developed with the objective of examining empirically the relationships between market knowledge management, innovation and the performance of new products in the market. This chapter analyzes KM (Knowledge Management) success through a market-oriented perspective because, at the end of the day, KM success must lead to better organizational performance. The research model was generated by the combination of market knowledge models and KM success and maturity models. By means of a survey of 387 medium and large industrial firms and the use of structural equation modeling, the supremacy of the competitor knowledge management process over the other constructs was verified: it is the most important antecedent of new product performance in the market. The results also revealed that innovation was strongly impacted by technology knowledge management and customer knowledge management. DOI: 10.4018/978-1-60566-709-6.ch003
Introduction

In a business environment characterized by a high level of competitiveness, the impact of new products on revenues is an important factor. Innovation consequences include firms' innovativeness, their ability to create and implement new ideas, products and processes, and new product performance, defined as the success of new products in terms of market share, sales, return on investment, and profitability (Im and Workman, 2004; Hult and Ketchen, 2001; Kirca et al., 2005). Innovation is usually described as a knowledge-intensive activity, involving the discovery, experimentation, and development of new technologies, services, production processes and organizational structures (Carneiro, 2000; Khalifa et al., 2008). In a post-industrial society, the growing perception of the strategic role of knowledge in innovation processes has contributed to the development of Knowledge Management (KM) initiatives. KM refers to identifying and leveraging the collective knowledge in an organization to help it compete (von Krogh, 1998). KM intends to be an area of research and practice that deepens the understanding of knowledge processes in organizations, and develops procedures and instruments to support the transformation of knowledge into economic and social progress (Carvalho and Ferreira, 2001). In fact, different aspects of these issues have been studied for decades in many different disciplines, and one of the most important contributions of the KM concept is creating a space (in the academic and business worlds) where practitioners and scholars from different backgrounds may discuss and work together. KM is closely related to the organization's capabilities of collecting, filtering, organizing and disseminating existing information and knowledge. The organizational knowledge strategy is usually a mix of exploitation and exploration (Choo and Bontis, 2002). Exploitation emphasizes knowledge codification and the reuse of existing knowledge. When exploitation is overemphasized,
the organization may diminish its capacity to innovate, resulting in obsolescence. On the other hand, exploration stimulates the creation of new knowledge, applying it to the development of products and services. When exploration is overemphasized, the organization reduces its ability to externalize knowledge and to convert it into organizational memory. Despite the quicker return on investment (ROI) of the exploitation approach, the dynamic balance between exploration and exploitation seems to produce better results over the longer term, because innovation demands exploration. Furthermore, the collaborative development of strategy leverages a firm's collective knowledge and capabilities, leading to more creative and realistic strategies (Gebhardt et al., 2006). This collaborative process also leads to higher commitment to the firm, which again increases the likelihood of success. Internal sources of organizational knowledge include business processes, databases and employees, while external sources consist of interorganizational processes, customers, business partners, market and competitive intelligence (Khalifa et al., 2008). Many of the existing studies in the KM field place more emphasis on organizational internal knowledge and its exploitation. This survey intended to discuss the exploration perspective that is related to market and customer knowledge, pushing the KM approach beyond the boundaries of the firm. This complementary orientation is justified because it was observed that the financial results of some firms have improved more than others in the same market segment. As a result of efficient market orientation and creative management of market knowledge, it is possible that consecutive releases of new products and services with a high level of market acceptance have contributed to this advantageous position. According to Martin et al. (2009), a market orientation is a strong source of sustainable competitive advantage because it is difficult to imitate, focuses the firm on finding
opportunities for growth, and reduces the time lag for responding to those opportunities. This chapter analyzes KM success through a market-oriented perspective because, at the end of the day, KM success must lead to better organizational performance, which is closely related to market results. Furthermore, competitive capacity and position in the market (sales, market share, brand equity), proximity to customers and customer satisfaction, and innovative ability and activity are considered typical outcomes of organizational performance (Jennex, Smolnik, and Croasdell, 2008). KM success includes the ability to obtain knowledge about the environment; if a firm is not able to obtain market information, it will not be able to set strategies efficiently and leverage performance. In research developed by Jennex et al. (2007), it was observed that practitioners support the use of firm performance measures as indicators of KM success because they tend to favor definitions and measures that are objective and have an obvious impact on the organization, such as the new product performance discussed in this chapter. Nevertheless, some doubts arise as to whether KM initiatives are successful or whether KM is just another management fad. According to Verhoef and Leeflang (2009), short-term financial measures still dominate management functions to the detriment of strategic thinking, and this pressure increases as the economy suffers and global competition grows. To answer these questions, both researchers and practitioners have developed different approaches to understand and measure the impact of KM (Paulzen and Perc, 2002; Berztiss, 2002; Anantatmula and Kanungo, 2005; Khalifa et al., 2008; Jennex et al., 2008). Some of these approaches will be detailed further in this chapter. Additionally, the marketing strategy literature posits that market orientation provides a firm with the market-sensing and customer-linking capabilities that lead to superior organizational performance (Hult and Ketchen, 2001; Kirca et al., 2005). These points raised a fundamental
question: What is the impact of the processes of market knowledge management on innovation and market results of new products? The following section was organized to introduce the concept of market KM and to present some KM success measurement models.
Background

Market Orientation and Market KM

At the beginning of the 1950s, the concept of marketing and the philosophical foundations of the marketing orientation were introduced. Glazer (1991) considers market knowledge a company's strategic resource. Aaker (1998), Aaker et al. (1998), Capon et al. (1992), Day (1999) and Geus (1997) observed that the competencies associated with market knowledge can be the elements that generate competitive advantages in new products (Porter, 1995). Market orientation has been conceptualized from both behavioral and cultural perspectives (Kirca et al., 2005). The behavioral perspective concentrates on organizational activities that are associated with the generation of, distribution of, and responsiveness to market intelligence (Kohli and Jaworski, 1990). On the other hand, the cultural perspective places focus on organizational norms and values that encourage behaviors that are consistent with market orientation (Narver and Slater, 1990). Market orientation is a fundamental aspect of an organization's culture that defines competitive value, norms, artifacts and behaviors that collectively create the opportunity for competitive advantage (Martin et al., 2009). Verhoef and Leeflang (2009) define market orientation as a business culture that (1) places the highest priority on the profitable creation and maintenance of superior value for customers while considering the interests of other stakeholders and (2) provides norms of behavior regarding market information. The cultural perspective is also present in some KM
success models, such as those of Lindsey (2002) and Ehms and Langen (2002). Jennex and Olfman (2005) also included organizational culture and structure for learning, sharing and use of knowledge in a set of 12 KM critical success factors (CSF). Narver and Slater (1990) observed that a business should be capable of maintaining a culture able to generate behaviors oriented to the market in order to create superior value for the customers and to obtain sustained competitive advantage. In this connection, they defined market orientation as an organizational culture that aims to create and put into practice, in an efficient way, behaviors that generate value for the customers and, consequently, better results in the marketplace (Slywotzky, 1996). Narver and Slater (1990) confirmed the hypothesis that market orientation has a significant impact on business performance. It therefore makes sense to hold that orientation to customers, orientation to competitors and interfunctional coordination produce results for companies. Based on marketing orientation theory, Li and Calantone (1998) defined marketing knowledge as organized and structured information about the market. In this definition, organized means it is the result of systematic processing (as opposed to random picking) and structured implies that it is endowed with useful meaning (as opposed to discrete items of irrelevant data). The authors defined "Competence of Market Knowledge" as three processes that integrate and generate market knowledge. The following three processes are implemented as a series of activities that generate and integrate knowledge: management of customer knowledge; management of competitor knowledge; and the marketing-Research & Development (R&D) interface. Although many organizations perceive the importance of market knowledge, there is a trend among managers to overemphasize one process while ignoring the others (Day and Wesley, 1988). Li and Calantone (1998) stated that such imbalanced practice might
result in fragmentary knowledge and weaken the effectiveness of a market KM system. The process of customer knowledge management refers to the group of activities and behaviors that generate knowledge about the current and potential needs of customers concerning new products. The process of competitor knowledge management involves behavioral activities that generate knowledge about competitors' strategies and actions. The interface between marketing and R&D refers to the process by which the marketing and R&D areas communicate and cooperate with one another. Li and Calantone (1998) applied multivariate techniques to evaluate the following model (Figure 1). This study considered processes of knowledge management that were mainly focused on knowledge available in the market, so it did not consider new knowledge generated inside the organization from innovation. Li and Calantone (1998) also checked the influence of the processes of customer and competitor knowledge management on the competitive advantage of new products. However, the weights obtained (standardized betas) were 0.23 and 0.20 respectively, and, although relatively low, were significant. The findings also indicated that the perceived importance of market knowledge by top management had the largest impact on the process of market knowledge competence. One of the interesting results of Li and Calantone's (1998) work was the evidence that the most innovative firms, which they called first-movers due to their short time to market, would be able to achieve better performance because they would have better and earlier access to customer information. Hurley and Hult (1998) carried out important research looking for causal relationships and antecedents of competitive advantage, with a special focus on innovation. In this research, characteristics of the organization concerning structures, processes and culture were proposed as antecedents of innovation and performance in the market (Figure 2).
Figure 1. Li and Calantone’s (1998) research model
Figure 2. Hurley and Hult’s (1998) research model
Hurley and Hult’s (1998) research, as well as that carried out by Li and Calantone (1998), demonstrated strong relations with the work of Narver and Slater (1990) and the marketing orientation.
KM Success and KM Maturity Models

The knowledge-based view (KBV) highlights knowledge-intensive capabilities as the main drivers of organizational performance (Grant, 1996; Khalifa et al., 2008). Compared to a subject such as software engineering, however, the domain of KM consists more of soft subjects. Nevertheless, the existence of open standards and common approaches for KM will allow future work to start from a higher level, and most arguments brought against standardization of KM can be classified as general concerns against standardization (Weber et al., 2002). In a maturity model, the levels are characterized by specific requirements that have to be achieved at that level, and it is highly improbable that a level can be skipped in an evolutionary process. An assumption of a maturity model is that more mature organizations have higher chances of obtaining success in their projects. For an immature organization, success might be a matter of trial and error, luck, or heroic effort. In other words, it is hard to expect clean water (organizational success) from rusty pipes (inefficient and immature processes). Two widely known approaches among KM practitioners are the APQC (American Productivity & Quality Center) Road Map to KM Results and the KMMM (KM Maturity Model) developed by Siemens. The APQC Road Map is a methodology to guide organizations through the five stages of KM implementation, with relevant advice concerning processes, structures, and enablers (Hubert and O'Dell, 2004). The APQC Road Map provides a qualitative evaluation of KM practices in the following five stages: getting started; explore and experiment; pilots and KM initiatives; expand and support; and institutionalize KM.
The KMMM provides qualitative and quantitative results, allowing a comprehensive assessment of KM activities that covers eight key areas: strategy and knowledge goals; environment and partnerships; people and competencies; collaboration and culture; leadership and support; knowledge structures and knowledge forms; technology and infrastructure; and processes, roles, and organization (Ehms and Langen, 2002). The KMMM was strongly influenced by the CMM (Capability Maturity Model) of the Software Engineering Institute (SEI) at Carnegie Mellon University. Although the CMM (Paulk et al., 1995) is applied to the software development context, the KMMM adopts the same names for its five levels and adapts the maturity concept to the KM domain. The five levels are: initial, repeatable, defined, managed, and optimizing. The maturity level is assessed for the individual topics and condensed into one maturity level for each key area. Berztiss (2002) also proposed a capability maturity model for KM based on the CMM. The following requirements, defined by the CMM as KPAs (key process areas), were suggested for each KM maturity level (a sketch of how such a level assessment might be computed follows the list):

• Level 1: absence of structured KM practices
• Level 2: knowledge requirements management, internal knowledge acquisition, uncertainty awareness, and training
• Level 3: knowledge representation, user access and profiling
• Level 4: integrated KM process, external knowledge acquisition, cost-benefit qualitative analysis
• Level 5: technical change management and quantitative cost/benefit analysis
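The sketch referenced above shows one hypothetical way such a maturity level could be assessed from the KPA list, assuming levels must be satisfied cumulatively (no level skipping). The practice names are paraphrased from the list and the assessment logic is illustrative; it is not part of Berztiss (2002).

```python
# Hypothetical assessment of a KM maturity level from key process areas (KPAs).
# KPA names are paraphrased from the list above; the cumulative rule reflects the
# assumption that levels cannot be skipped.

KPAS_BY_LEVEL = {
    2: {"knowledge requirements management", "internal knowledge acquisition",
        "uncertainty awareness", "training"},
    3: {"knowledge representation", "user access and profiling"},
    4: {"integrated KM process", "external knowledge acquisition",
        "cost-benefit qualitative analysis"},
    5: {"technical change management", "quantitative cost/benefit analysis"},
}

def maturity_level(practices_in_place):
    """Return the highest level whose KPAs, and all lower levels' KPAs, are satisfied."""
    level = 1  # Level 1 corresponds to the absence of structured KM practices.
    for lvl in sorted(KPAS_BY_LEVEL):
        if KPAS_BY_LEVEL[lvl] <= practices_in_place:
            level = lvl
        else:
            break  # a missing KPA at this level blocks all higher levels
    return level

# Example: an organization that satisfies levels 2 and 3 but not level 4.
org = {"knowledge requirements management", "internal knowledge acquisition",
       "uncertainty awareness", "training", "knowledge representation",
       "user access and profiling", "external knowledge acquisition"}
print("Assessed KM maturity level:", maturity_level(org))  # -> 3
```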
In order to establish a more consistent link between market orientation and KM, the research model proposed in this chapter will also have the Knowing Organization Model (Choo, 1998) as a theoretical background. This framework describes
organizations as systems where the processes of sensemaking, knowledge creating and decision-making are continuously interacting and combining external and internal knowledge. Sensemaking is closely related to market orientation, as it is defined as how the organization interprets and makes sense of its changing environment, which leads to shared meanings and intent. Knowledge creation is accomplished through the conversion and sharing of different forms of organizational knowledge, resulting in new capabilities, new products and innovation. Finally, the organization processes and analyzes information through the use of rules and routines that reduce complexity and uncertainty (Choo, 1998). Smith and Farquhar (2000) proposed four success statements for KM: (1) the organization knows what it knows and uses it, and knows what it needs to know and learns it; (2) for any project, for any customer, the project team delivers the knowledge of the overall organization; (3) the organization delivers the right information, to the right people, at the right time, with the tools they need to use it; (4) the perspective of the employees is aligned with that of the customers. Lindsey (2002) proposed a KM effectiveness model with two constructs: knowledge infrastructure capability and knowledge process capability. The first construct represents social capital (relationships between knowledge sources and users) and is operationalized by technology, structure, and cultural context. Knowledge process capability represents the integration of KM processes into the organization and is operationalized by acquisition, conversion, application, and protection of knowledge (Lindsey, 2002; Jennex, 2005). Massey et al. (2002) presented a KM success model with the following key components: strategy (knowledge processes, users, technology infrastructure), key managerial influences (top management support, KM leadership, KM metrics), key resource influences (mainly financial resources), and key environmental influences, defined as the external forces that drive the
organization to exploit its knowledge to maintain its competitive position. This chapter places more focus on the key environmental influences, which are critical not only to KM success but also to organizational success. On the other hand, after studying 147 organizations in 21 countries, Anantatmula and Kanungo (2005) stated that widely accepted criteria and performance measures have not been developed for KM. Their research questions were: what are the criteria for measuring KM success, and how do managers use and understand these criteria to leverage their KM assets? Their research results implied that managers must consciously explore and establish the ambiguous relationships between KM results and bottom-line business measures. The authors also said that future research should focus on translating the soft measures of KM into detailed metrics. After conducting a panel with 30 KM experts and a survey with 103 members of a KM community, Jennex, Smolnik and Croasdell (2007) were able to establish the following base definition of KM success: "KM success is a multidimensional concept. It is defined by capturing the right knowledge, getting the right knowledge to the right user, and using this knowledge to improve organizational and/or individual performance. KM success is measured using the dimensions of impact on business processes, strategy, leadership, efficiency and effectiveness of KM processes, efficiency and effectiveness of the KM system, organizational culture and knowledge content." Indeed, KM success and KM itself are very broad concepts. For instance, Dalkir (2005) was able to produce a list of 72 very consistent KM definitions. Within the scope of this research, KM success was associated with innovation and the performance of new products in the market. This association was a way of translating a soft KM measure into bottom-line business measures, as
suggested by Anantatmula and Kanungo (2005). We also think that this external and market-oriented perspective of KM success provides a better connection between KM and the strategy of the firm.
Main Focus of the Chapter

Research Model

This section explains the development process of the research model by connecting the literature review with the definitions of the research constructs. Most of the previous studies consider processes of KM that are mainly focused on pre-existent knowledge, or knowledge available in the market, so they usually do not consider new knowledge generated inside the organization, which comes from innovation. In this research, characteristics of the organization concerning structures, processes and culture are proposed as antecedents of innovation and performance in the market. In a way that is similar to Li and Calantone's (1998) research framework, the model proposed in this research consists of three dimensions: external and internal antecedents, contributory factors, and results. However, it differs from their model because the construct marketing-R&D interface was replaced by the technological KM construct. Thus, the three main processes of KM operate in a more homogeneous way, with the aim of improving on the model proposed by Li and Calantone (1998), more specifically with regard to market KM. The research model also adds the construct innovation as an antecedent of new products' market performance, acting as a mediator variable between marketing KM and performance, in a similar way to that described by Hurley and Hult (1998). Figure 3 illustrates the general conceptual model of the research. The theoretical sources of the explanatory model are introduced next, classified by the construct involved in the model and its respective hypothetical relations.
Process of Customer Knowledge Management

Market orientation proposes to enhance the customer-perceived quality of products and services by helping maintain superior customer value (Brady and Cronin, 2001; Kirca et al., 2005). Market orientation enhances customer loyalty and satisfaction because market-oriented companies anticipate customer needs and offer products and services to satisfy those needs (Slater and Narver, 1994). According to Verhoef and Leeflang (2009), several studies have revealed that marketing activities, such as creating satisfied customers and corporate advertising, are positively related to shareholder value and greater customer lifetime value. By responding rapidly to changes in customer demand, a company can enhance customer satisfaction and loyalty, leverage the knowledge embedded in customers, and take advantage of the windows of opportunity that appear in the market from time to time (Khalifa et al., 2008). Consistent with the theories of organizational learning (Hair et al., 1998), the process of customer KM can be approached using Davenport and Prusak's (1998) model, which consists of generating, codifying and distributing knowledge. The customer KM process would involve marketing research, regular meetings and interactions with customers, personal interviews and focus groups (Capon et al., 1992), and problem-solving sessions. Choo (1998) considered sensemaking as a dimension of the Knowing Organization model, Berztiss (2002) listed external knowledge acquisition as one of the key indicators of KM maturity, and Ehms and Langen (2002) included environment and partnerships as a key area of the KMMM, giving support to two constructs in our research model: customer KM and competitor KM. Han, Kim and Srivastava in Narver and Slater (1995), Li and Calantone (1998), Jaworski and Kohli (1990), and Hurley and Hult (1998) are some of the authors that have performed empirical research on customer orientation, innovation, market KM
Figure 3. Research Model
and management of customer intelligence, and their role as antecedents of results and market orientation. Based on this analysis, hypotheses H1 and H2 were elaborated:

H1: The more intense the process of customer KM is, the greater will be the intensity of innovation.

H2: The more intense the process of customer KM is, the greater will be the intensity of the performance of the new products in the market.
Process of Competitor Knowledge Management

Truly market-oriented firms identify competitive advantages based on both satisfying the current needs of customers and doing so better than competitors (Martin et al., 2009). Narver and
Slater (1990) have defined customer and competitor orientation as information acquisition and dissemination activities that are necessary to understand what buyers value and the strategies used by competitors in serving target buyers. This knowledge provides a framework to create superior value for customers relative to competitors. The process of competitor KM involves obtaining, codifying, storing and distributing information as a continuous activity of competitive intelligence gathering. Knowledge of competitors exercises a fundamental role in the competitive positioning of organizations (Day, 1999; Aaker, 1998; Geus, 1997). Geus (1997) affirmed: "the only source of competitive advantage in the future will be to learn (about the competition) faster than your competitors." Previous empirical research analyzed the influence of competitors regarding the intensity of market
Previous empirical research analyzed the influence of competitors on the intensity of market orientation (Narver and Slater, 1995). This suggested that omitting competitor knowledge would leave the proposed model lacking important information and led to the formulation of hypotheses H3 and H4:
H3: The more intense the process of competitor KM is, the greater will be the intensity of innovation.
H4: The more intense the process of competitor KM is, the greater will be the intensity of the performance of the new products in the market.
Process of Technological Knowledge Management
Narver and Slater (1995), along with Jaworski and Kohli (1990), aimed to verify the empirical connections between technological change and the results achieved in the marketplace. Ehms and Langen (2002) included technology and infrastructure as a key area of the KMMM. However, Jennex (2005) proposes that for KM systems it is not the amount of use that is important, but rather the quality of use and the intention to use. In their research, Li and Calantone (1998) failed to directly consider the management of technological knowledge. In view of this omission, and because of the importance of verifying the influence of the management of technological knowledge on the innovation process and the results obtained in the market, hypotheses H5 and H6 were formulated:
H5: The greater the intensity of the process of technological knowledge management is, the greater will be the intensity of innovation.
H6: The greater the intensity of the process of technological knowledge management is, the greater will be the intensity of the performance of the new products in the market.
Innovation and Performance of New Product in the Market
Market orientation should improve a firm's innovativeness and new product performance because it drives a continuous and proactive disposition toward meeting customer needs (Kirca et al., 2005). On the other hand, excessive formalization (the definition of roles, procedures, and authority through rules) is usually inversely related to market orientation because it may inhibit the development of effective responses to changes in the marketplace (Jaworski and Kohli, 1993; Kirca et al., 2005). Furthermore, a short-term emphasis blocks innovation and reduces investments in brands, customers, and new business development (Verhoef and Leeflang, 2009). Ashok (1999) and Peters (1998) defended the idea that innovation is perhaps the most important element in generating competitive advantage. Workman et al. (1998) observed a correlation between the capacity of a company to innovate and its competitive advantage, considering both as a single construct. Verhoef and Leeflang (2009) defined the innovativeness of the firm as the extent to which there is a strong emphasis on R&D, technological leadership, and innovations within the firm. According to Khalifa et al. (2008), agility refers to the ability to detect and seize continually and unpredictably changing market opportunities by assembling requisite assets, knowledge, and relationships with speed and surprise, while innovativeness depicts the ability of the organization to initiate and implement innovations at a faster rate (Hurley and Hult, 1998). Li and Calantone (1998) observed a positive correlation between the competitive advantage of a new product and the performance of a new product. Hurley and Hult (1998) observed positive relations between innovation, competitive advantage, and performance, leading to the proposition of the following hypothesis:
H7: The greater the intensity of innovation, the more significant will be the performance of the new product in the market.
The concept of "Market Knowledge Competence" was introduced by Li and Calantone (1998) to cover the group of processes that generate and integrate market knowledge. However, the authors made no attempt to verify reciprocal relations among these constructs, to check whether they make up a broader conceptual framework. To test these possibilities empirically, the following hypotheses were formulated:
H8: The process of customer KM influences and is influenced by technological KM.
H9: The process of competitor KM influences and is influenced by customer KM.
H10: The process of technological KM influences and is influenced by competitor KM.
Methodology
First, a literature review was carried out. Some classic and seminal works on marketing knowledge, market orientation, KM, innovation, and strategic marketing were analyzed (Treacy and Wiersema, 1995), as well as a series of previous empirical studies that preceded this article. Based on the defined constructs, the measurement items were obtained from previous research and the existing literature, from focus groups with managers, and from specialist panels. A pre-test involving 46 respondents was conducted and analyzed. An 11-point Likert-type scale was adopted so that the items could be processed as continuous variables and possibly achieve better measurement. The research constructs were generated by the following procedures:
Customer KM. Operational definition: the items were obtained from Li and Calantone's (1998) research, Jaworski and Kohli (1993), Ehms and Langen (2002), Berztiss (2002), Choo (1998), focus groups, and specialist panels.
Competitor KM. Operational definition: the items were obtained from Li and Calantone (1998), Jaworski and Kohli (1993), Ehms and Langen (2002), Choo (1998), focus groups, and a specialist panel.
Technological KM. Operational definition: the items were obtained initially from research by Cooper (1984), Davenport (1998), Nonaka and Takeuchi (1997), Ehms and Langen (2002), Clark and Wheelwright (1995), Day (1999), and Ashok (1999).
Innovation of New Products. Operational definition: the items were obtained from Hurley and Hult (1998), focus groups, and a specialist panel.
Performance of New Product in the Market. Operational definition: the items were obtained from Li and Calantone (1998), focus groups, and a specialist panel.
Data Analysis
It was decided to carry out a survey using mail as the principal means of contacting the respondents. The sample consisted of 1,870 medium and large Brazilian industrial firms that are members of FIEMG (the Federation of Industries of the State of Minas Gerais, the third-largest state in Brazil). Most of the industries sampled operate in very competitive markets, such as the clothing, packaging, furniture, and automotive industries. The companies in the sample had more than 30 employees. This cross-section was selected because a pre-test confirmed that the marketing structure in small organizations does not allow them to answer the questionnaire correctly. The questionnaires were answered by the marketing managers of these companies.
Figure 4. Sample profile by industry
After making the required sampling calculations, it was concluded that there should be at least 258 observations for a 5% margin of error at a 95% confidence level. At the end of the survey, 387 valid answers were obtained. The operational part of the research began in January 2008 and ended in May 2008. The types of industries in the sample are exhibited in Figure 4.
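As a reading aid, the short sketch below applies the standard finite-population sample-size formula to a frame of 1,870 firms. The margin of error and confidence level are those stated in the text, while the assumed proportion (p = 0.5) and the use of Cochran's correction are illustrative assumptions of ours; the chapter does not report the exact parameters behind its figure of 258, so the sketch shows the general procedure rather than reproducing that number.

import math

def required_sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    # Cochran's formula for an infinite population, then a finite-population correction.
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(required_sample_size(1870))  # about 319 under these illustrative assumptions

Under these conservative assumptions the minimum is roughly 319 observations; in either case, the 387 valid questionnaires obtained exceed both that figure and the 258 reported above.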
Construct Validity
The returned questionnaires were checked for incomplete or blank data, since this is a very common occurrence in self-administered questionnaires. After this, a check was carried out for univariate and multivariate outliers. The Mardia statistic reported in the LISREL output (Jöreskog and Sörbom, 1998), designated PK and based on kurtosis and asymmetry functions, should have a value smaller than 3; on the basis of this practical criterion, the hypothesis that multivariate normality was reached could be accepted. All the constructs presented Cronbach's Alpha values within the acceptance range, that is, greater than or equal to 0.70. In order to examine reliability more deeply, an analysis of composite reliability was carried out.
It was observed that the composite reliability of the constructs was above 0.5, which, according to Hair et al. (1998), is acceptable. An exploratory factor analysis of the items of each construct was carried out in order to verify unidimensionality (Germain, Droge, and Daugherty, 1994). After the withdrawal of some items, based on the researchers' judgment, compliance with the unidimensionality premise was obtained. To verify the convergent validity of the constructs, each construct was subjected to a confirmatory factor analysis in order to observe the significance of the weight of each item on its respective construct. Bagozzi, Yi, and Phillips (1991), as well as Im, Grover, and Sharma (1998), indicate the need for such a procedure. Discriminant validity was assessed using a procedure recommended by Bagozzi, Yi, and Phillips (1991). In the case of the sample, all the constructs presented discriminant validity.
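To make the reliability checks above concrete, the following sketch computes Cronbach's Alpha from raw item scores and composite reliability from standardized factor loadings. The thresholds in the comments are those cited in the text; the data are invented for illustration and are not taken from the study.

import numpy as np

def cronbach_alpha(items):
    # items: (respondents x items) matrix of scores for one construct
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def composite_reliability(loadings):
    # loadings: standardized loadings of one construct's items
    loadings = np.asarray(loadings, dtype=float)
    error_var = (1 - loadings ** 2).sum()
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + error_var)

scores = [[8, 7, 9, 8], [5, 6, 5, 4], [9, 9, 10, 9], [3, 4, 2, 3], [7, 6, 7, 8]]
print(cronbach_alpha(scores))                      # accepted when >= 0.70
print(composite_reliability([0.72, 0.68, 0.81]))   # accepted here when above 0.5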
Explanatory Phase
The authors opted for the process of direct estimation, using the covariance matrix as the input matrix (Hair et al., 1998). The estimation method chosen for this research was GLS (Generalized Least Squares), which, according to Hair et al. (1998), is an appropriate method of estimation when, taking into account the sample size, the data are moderately non-normal.
Table 1. Fit indexes of the research model
χ2 (Chi-Square) | DF | χ2/DF | RMSEA | GFI | AGFI | PNFI | NFI | P
91.01 | 75 | 1.34 | 0.029 | 0.975 | 0.954 | 0.583 | 0.895 | 0.101
Notes: DF (Degrees of Freedom); RMSEA (Root Mean Square Error of Approximation); GFI (Goodness of Fit Index); AGFI (Adjusted Goodness of Fit Index); PNFI (Parsimony Normed Fit Index); NFI (Normed Fit Index)
Figure 5. Path indexes of the research model
The structural relationships for the validation of the hypotheses and the model were obtained through the AMOS 4.0 program from SPSS. In the test of the model, the objective is to verify the relations between the KM processes (customer, competitor, and technology), innovation, and new product performance in the market.
The model fits well, as can be seen in Table 1, and is shown in Figure 5. According to Table 2, the critical t value at the 5% level is above 1.96, showing that the weights are statistically significant, except for the path between competitor KM and new product innovation.
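For readers unfamiliar with the indices in Table 1, the sketch below shows how the normed chi-square and the RMSEA are conventionally computed from the chi-square statistic, the degrees of freedom, and the sample size. The sample size of 387 is our assumption (the N actually used in the AMOS estimation is not reported), and the AMOS output in Table 1 may reflect adjustments not captured by these textbook formulas, so the printed values only approximate the tabled ones.

import math

def normed_chi_square(chi2, df):
    # values below about 3 are usually read as acceptable fit
    return chi2 / df

def rmsea(chi2, df, n):
    # Root Mean Square Error of Approximation; below 0.05 indicates close fit
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

chi2, df = 91.01, 75   # values reported in Table 1
print(normed_chi_square(chi2, df))   # about 1.2 with these inputs
print(rmsea(chi2, df, n=387))        # about 0.02 with the assumed sample size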
Table 2. Path analysis: non-standardized weights and significance
Paths | Estimated values | Standard error | Statistic t | P Value
Competitor KM → Innovation | 0.105 | 0.065 | 1.619 | >0.10
Customer KM → Innovation | 0.094 | 0.048 | 1.978 | <0.05
Technology KM → Innovation | 0.294 | 0.049 | 5.981 | <0.01
Customer KM ↔ Competitor KM | - | - | 6.945 | <0.01
Customer KM ↔ Technology KM | - | - | 7.317 | <0.01
Competitor KM ↔ Technology KM | - | - | 6.549 | <0.01
Technology KM → Product Performance | 0.131 | 0.043 | 3.049 | <0.01
Customer KM → Product Performance | 0.086 | 0.043 | 2.006 | <0.05
Competitor KM → Product Performance | 0.259 | 0.065 | 3.964 | <0.01
Innovation → Product Performance | 0.246 | 0.049 | 4.966 | <0.01
Table 3. Evaluation of hypothetical relationships
Hypothesis | Hypothetical relation | Results obtained
H1: Management of Customer's Knowledge → Innovation | Positive | Confirmed (2)
H2: Management of Customer's Knowledge → New Product Performance | Positive | Confirmed (2)
H3: Management of Competitor's Knowledge → Innovation | Positive | Not Confirmed (3)
H4: Management of Competitor's Knowledge → New Product Performance | Positive | Confirmed (1)
H5: Management of Technological Knowledge → Innovation | Positive | Confirmed (1)
H6: Management of Technological Knowledge → New Product Performance | Positive | Confirmed (1)
H7: Innovation → New Product Performance | Positive | Confirmed (1)
H8: Management of Customer's Knowledge ↔ Management of Technological Knowledge | Positive | Confirmed (1)
H9: Management of Customer's Knowledge ↔ Management of Competitor's Knowledge | Positive | Confirmed (1)
H10: Management of Competitor's Knowledge ↔ Management of Technological Knowledge | Positive | Confirmed (1)
Notes: (1) Estimate is positive and significant (p < 0.01). (2) Estimate is positive and significant (p < 0.05). (3) Estimate is not significant (p > 0.05).
The research model considered innovation as a mediating construct between market KM and the performance of a new product in the market. Innovation, as we might expect, had a strong influence on performance, as did technological KM. The process of customer KM had less influence on performance than the processes of competitor and technological KM, an interesting result. In the organizations sampled, new products with representative success in the market were generated by a stronger focus on competitor (ß=0.31), innovation (ß=0.29), and technological (ß=0.21) aspects, compared to customer KM (ß=0.13).
It seems that in this particular industrial market, organizations that obtain more expressive results establish their product strategies by applying a process based mainly on a careful analysis of competitors' actions and tactics. An analysis of the influence of the market KM constructs on innovation revealed that innovation came mainly from technological KM (ß=0.40) and from customer KM (ß=0.12).
It was ascertained that a focus on technology was the most important source of innovation in the model analyzed. Table 3 allows evaluation of the hypothesized relationships proposed.
Main Findings
Market KM has been seen as an important area of research from the point of view of managerial efficiency. However, our current understanding of this phenomenon is restricted, since market KM in organizations is a complex area that presents difficulties of conceptualization and measurement. Menon and Rajan (1992) stated that organizations have become more market oriented and, as a result of that emphasis, the application of marketing intelligence concepts, as well as the use of information generated by research, has acquired strong relevance.
The first aspect to be considered is that the three KM processes considered in this research are elements that enhance the results of new products. Such results are, as a whole, consistent with those obtained by Li and Calantone (1998). However, certain differences between those results have to be considered. In the results of our research, the order of importance of the three processes as antecedents of performance was the following: competitor knowledge (ß = 0.31), technological knowledge (ß = 0.21), and customer knowledge (ß = 0.12). According to this model, improvement in the management of competitor knowledge was the process that generated the greatest results with new products. This probably occurred because of the process of product strategy planning, which seemed to be essentially a comparative process between competitors' products (ß = 0.31). Such empirical verification raised questions about the statement found in some marketing texts that "the customer is the main focus". Regarding this point, it was noted that the process of creating successful products (with actual results reflected in the market) is based much more on
competing products (Hamel and Prahalad, 1995) and on technology than on actual knowledge of customers' wishes. From the Knowing Organization model perspective (Choo, 1998), it was found that our sample of organizations gives priority to making sense of their competitors rather than to understanding customer needs. The processes of KM had important bilateral connections, demonstrating that they may perhaps be considered part of a wider conceptual element. Jaworski and Kohli (1993) observed in their research that there is a low correlation between market orientation and the results obtained. In this research, if we consider that the more intense the processes of market KM become, the greater the degree of market orientation, one can infer that these correlations may be more significant than the ones obtained by the above-mentioned authors. According to this research, it can be concluded that the processes of market KM positively influenced the results of new products in the market and that the degree of influence varied depending on the modality of knowledge (customers, competitors, and technology). The process of customer KM had less influence on performance than the processes of competitor and technological KM, an interesting result. In the organizations sampled, new products with representative success in the market were generated by a stronger focus on competitor (ß=0.31), innovation (ß=0.29), and technological (ß=0.21) aspects, compared to customer KM (ß=0.13). An analysis of the influence of the market KM constructs on innovation revealed that innovation came mainly from technological KM (ß=0.40) and from customer KM (ß=0.12). It was ascertained that a focus on technology was the most important source of innovation in the model analyzed.
Conclusion
This study brings to light the importance of the processes of market KM in obtaining innovation and performance in new product development (Olson et al., 1995). Li and Calantone (1998) pointed out that organizations need to manage the three processes of marketing knowledge (customers, competitors, and technology) in the process of developing new products. Day and Wensley (1988) argued that companies should manage marketing knowledge and be market oriented in a balanced way, since it is important to focus on both customers and competitors. However, the research presented in this paper revealed that this might not be the best approach. The data obtained in this research suggested investment in the management of competitor knowledge, innovation, and the management of technological knowledge, in this order. It was observed that the "focus on the customer" or "orientation to the customer", elements mentioned in a significant number of marketing books, is not the main driving factor of the performance of new products in the industries investigated. It seems that the focus on the comparative advantage of products is more important than the focus on customers' desires, because the purchase decision is a comparative process of value. Thus, the following questions could be formulated: Is customer orientation a profitable choice? What is the best balance among competitors, customers, and technology? Do such factors vary according to industry, trade, or services? Some of these topics can be important elements for future research opportunities.
On the other hand, innovative companies present processes of KM that focus more closely on technology and customer knowledge. This fact leads to the following possibility: the focus on technological knowledge is the main factor in innovation, and it should receive greater investment and attention than the others. This procedure should be guided by customer knowledge, in the manner proposed by Ashok (1999). However, at present this does not happen.
The processes of market knowledge management also stood out as important generators of results regarding new products. This fact is connected to the strategic decisions made by organizations that have valid market information obtained through the processes of market KM, emphasizing the importance of managing them in a structured, systematized, and pragmatic way. Market-oriented firms are fundamentally learning organizations because, through time, members of the firm share common experiences that become shared process and market schemas (Gebhardt et al., 2006). According to these authors, such schemas enable organization members to communicate and collaborate effectively in the process of gathering, disseminating, and reacting to market intelligence. Finally, Gebhardt et al. (2006) state that the importance of organizational learning for market orientation extends beyond simply encoding the lessons of history, because organizations also develop the capacity to evolve. The field of KM success is making gradual progress. It is necessary to face the challenge of both carrying out studies such as this one, which focuses on specific areas of the organization, such as marketing, and of identifying and investigating the links between KM and organizational performance.
References
Aaker, D. A. (1998). Strategic marketing management. New York: John Wiley. Aaker, D. A., Kumar, V., & Day, G. S. (1998). Marketing research. New York: John Wiley. Anantatmula, V., & Kanungo, S. (2005). Establishing and Structuring Criteria for Measuring Knowledge Management Efforts. In Proceedings of the 38th Hawaii International Conference on System Sciences. Ashok, G. (1999). Business driven research & development: managing knowledge to create wealth. West Lafayette: Ichor Business.
Bagozzi, R. P., Yi, Y., & Phillips, L. W. (1991). Assessing Construct Validity in Organizational Research. Administrative Science Quarterly, 36, 421–458. Berztiss, A. T. (2002). Capability Maturity for Knowledge Management. In Proceedings of the 13th International Workshop on Database and Expert Systems Applications (DEXA'02). Brady, M. K., & Cronin, J. (2001). Effects on Customer Service Perceptions and Outcome Behaviors. Journal of Service Research, 3(February), 241–251. Capon, N., et al. (1992). Profiles of Product Innovators among Large U.S. Manufacturers. Management Science, 38, 157–169. Carneiro, A. (2000). How does knowledge management influence innovation and competitiveness? Journal of Knowledge Management, 4(2), 87–98. Carvalho, R. B., & Ferreira, M. A. T. (2001). Using information technology to support knowledge conversion processes. Information Research, 7(1). Retrieved May 10th, 2009 from http://informationr.net/ir/7-1/paper118.html Choo, C. W. (1998). The Knowing Organization. New York: Oxford University Press. Choo, C. W., & Bontis, N. (2002). Strategic management of intellectual capital and organizational knowledge. New York: Oxford University Press. Clark, K. B., & Wheelwright, S. C. (1995). The product development challenge. Boston: Harvard Business Review. Cooper, R. G. (1984). The impact of new product strategies: what distinguishes top performers? Journal of Product Innovation Management, 1(2), 151–164. Dalkir, K. (2005). Knowledge management in theory and practice. Boston: Elsevier.
Davenport, T., & Prusak, L. (1998). Working knowledge. Boston: Harvard Business School Press. Day, G. S. (1999). Market Driven Strategy. New York: The Free Press. Day, G. S., & Wensley, R. (1988, April). Assessing advantage: a framework for diagnosing competitive superiority. Journal of Marketing, 52, 1–20. Gebhardt, G. F., Carpenter, G. S., & Sherry, J. F. Jr. (2006). Creating a market orientation: a longitudinal, multifirm, grounded analysis of cultural transformation. Journal of Marketing, 70, 37–55. Germain, R., Droge, C., & Daugherty, P. J. (1994). The Effect of Just-in-time selling on organizational structure: an empirical investigation. JMR, Journal of Marketing Research, 31, 471–483. Geus, A. (1997). The Living Company. Boston: Harvard Business School Press. Glazer, R. (1991). Marketing in an Information-Intensive Environment: Strategic Implications of Knowledge as an Asset. Journal of Marketing, 55, 1–19. Grant, G. M. (1996). Prospering in dynamically-competitive environments: organizational capability as knowledge integration. Organization Science, 7(4), 375–387. Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate Data Analysis. Upper Saddle River, NJ: Prentice Hall. Hamel, G., & Prahalad, C. K. (1995). Competindo pelo futuro. Rio de Janeiro, Brazil: Campus. Hult, T. G., & Ketchen, D. J. (2001). Does Market Orientation Matter? A Test of the Relationship Between Positional Advantage and Performance. Strategic Management Journal, 22(September), 899–906.
Hurley, R. F., & Hult, G. T. M. (1998). Innovation, Marketing Orientation and Organizational Learning: An Integration and Empirical Examination. Journal of Marketing, 62, 42–54.
Kirca, A. H., Jayachandran, S., & Bearden, W. (2005). Market orientation: a meta-analytic review and assessment of its antecedents and impact on performance. Journal of Marketing, 69, 24–41.
Im, S., & Workman, J. P. (2004). Market Orientation, Creativity, and New Product Performance in High-Technology Firms. Journal of Marketing, 68, 114–132.
Kohli, A. K., & Jaworski, B. J. (1990). Marketing Orientation: The Construct, Research Proposition and Managerial Implications. Journal of Marketing, 30, 1–18.
Im, S. K., Grover, V., & Sharma, S. (1998). The use of structural equation modeling in research. (Report). Columbia: University of South Carolina.
Li, T., & Calantone, R. J. (1998). The Impact of Market Knowledge Competence on New Product Advantage: Conceptualization and Empirical Examination. Journal of Marketing, 62, 13–29.
Jaworski, B. J., & Kohli, A. K. (1993). Market Orientation: Antecedents and Consequences. Journal of Marketing, 57, 53–70. Jennex, M. (2005). The issue of system use in knowledge management systems. In Proceedings of the 38th Hawaii International Conference on System Sciences (HICSS). Jennex, M., & Olfman, L. (2005). Assessing knowledge management success. International Journal of Knowledge Management, 1(2), 33–49. Jennex, M., Smolnik, S., & Croasdell, D. (2007). Towards defining knowledge management success. In Proceedings of the 40th Hawaii International Conference on System Sciences (HICSS). Jennex, M., Smolnik, S., & Croasdell, D. (2008). Towards measuring knowledge management success. In Proceedings of the 41st Hawaii International Conference on System Sciences (HICSS). Jöreskog, K., & Sörbom, D. (1998). Lisrel 8: User's reference guide. Chicago: SSI. Khalifa, M., Yu, A. Y., & Shen, K. N. (2008). Knowledge management system success: a contingency perspective. Journal of Knowledge Management, 12(1), 119–132.
Lindsey, K. (2002). Measuring Knowledge Management Effectiveness: a task-contingent organizational capabilities perspective. In Proceedings of the 8th Americas Conference on Information System (AMCIS), 2085-2090. Martin, J. H., Martin, B. A., & Minnillo, P. R. (2009). Implementing a market orientation in small manufacturing firms: from cognitive model to action. Journal of Small Business Management, 47(1), 92–115. Massey, A. P., Montoya-Weiss, M. M., & O’Driscoll, T. M. (2002). Knowledge in the pursuit of performance: insights from Nortel Networks. Management Information Systems Quarterly, 26(3), 269–289. Menon, A., & Rajan, P. V. (1992). A model of marketing knowledge use within firms. Journal of Marketing, 56, 53. Narver, J. C., & Slater, S. F. (1990). The effect of market orientation on business profitability. Journal of Marketing, 54(4), 20–35. Narver, J. C., & Slater, S. F. (1995). Market Orientation and the Learning Organization. Journal of Marketing, 59(3), 63–74. Nonaka, I., & Takeuchi, H. (1997). The knowledge-creating company. Oxford, UK: Oxford University Press.
Olson, E. (1995). Organization for Effective New Product Development: The Moderating Role of Product Innovativeness. Journal of Marketing, 59, 48–62. Paulzen, O., & Perc, P. (2002). A maturity model for quality improvement in knowledge management. In Proceedings of the 13th Australasian Conference on Information Systems (ACIS), 243-253. Peters, T. (1998). O Círculo da inovação. São Paulo, Brazil: Harbra. Porter, M. (1995). Vantagem Competitiva. Rio de Janeiro, Brazil: Campus. Slater, S. F., & Narver, J. C. (1994). Does Competitive Environment Moderate the Market Orientation–Performance Relationship? Journal of Marketing, 58(January), 46–55. Slywotzky, A. (1996). Value Migration. Boston: Harvard Business School Press.
Smith, R. G., & Farquhar, A. (2000). The road ahead for knowledge management, an AI perspective. AI Magazine, 21(4), 17–40. Treacy, M., & Wiersema, F. (1995). The Discipline of Market Leaders. Reading, MA: Addison-Wesley. Verhoef, P. C., & Leeflang, P. S. H. (2009). Understanding the marketing department's influence within the firm. Journal of Marketing, 73, 14–37. Von Krogh, G. (1998). Care in knowledge creation. California Management Review, 40(3), 133–153. Weber, F., Wunram, M., Kemp, J., Pudlatz, M., & Bredehorst, B. (2002). Towards a common KM framework in Europe. In Proceedings of Unicom Seminar: Standardization in Knowledge Management. Workman, J. P., Homburg, C., & Gruner, K. (1998). Marketing organization: an integrative framework of dimensions and determinants. Journal of Marketing, 62, 21–41.
Chapter 4
Does KM Governance = KM Success?
Insights from a Global KM Survey
Suzanne Zyngier, La Trobe University, Australia
Abstract
This chapter examines factors that contribute to KM success by differentiating between KM leadership through management and through governance. We look at governance as a structural mechanism that both embeds KM into organizational activity and lifts it from a series of initiatives to a structured program of activities that are subject to authority, policy, risk management, financial fiduciary duty, and evaluation. Using evidence from 214 respondents to a global internet-based KM survey, we find that having a recognized, defined, and well-resourced authority for KM leads to strategically aligned benefits being realized from investment in KM. We demonstrate that governance through assigned authority strongly contributes to strategic KM success.
Introduction
The implementation of knowledge management (KM) programs continues to be contentious because it is frequently difficult to establish the return on investment. KM practitioners and theoreticians understand and undertake to create a sustainable program of strategies to leverage knowledge to fulfill organizational aims and objectives and to realize benefits from them.
However, many KM strategy implementations fail to realize the expected benefits or the return on investment sought by their organizations (Kulkarni, Ravindran, & Freeze, 2006). This in turn impacts the sustainability of those strategies (Zyngier, 2005). In information-based economies, knowledge is one of the key organizational resources and a competitive differentiator (Spender, 1996). The management of organizational knowledge is a complex imperative that is taken up by forward-looking strategic leadership. Therefore, we asked: does KM governance contribute to KM success?
In answering this question, we differentiate between management and governance, looking at governance as a structural mechanism that both embeds KM into organizational activity and lifts it from a series of initiatives to a structured program of activities that are subject to authority, policy, risk management, financial fiduciary duty, and evaluation. These are delivered in a structured manner in order to achieve the strategic aims and objectives of the organization. KM governance is defined as the exercise of authority over strategies to manage organizational knowledge for the realization of anticipated benefits (Weill & Ross, 2004). This chapter presents the facets of governance as authority, policy development, measurement, risk management, and financial responsibility. We describe how these activities are delegated and amplified into the development and implementation of KM strategy. Previous research into patterns of governance of knowledge management has focused on individual cases. This work furthers that discussion by presenting new global research conducted through a web-based survey. We report on the results of 214 respondents to a global survey on KM governance, with data collected from mid-March to the end of May 2008. Respondents came from 34 countries across every continent, indicating that KM is truly a widely practiced business activity. The survey respondents identify the structures of KM governance in their organizations, and the roles and tasks of KM governance activities. Against this background of clear KM governance, coupled with indicators such as increased funding of KM, the longevity of KM programs of activity, and the perceived realization of the strategic benefits of KM, we show those structures that can be deemed successful. Analysis and discussion lead the reader to understand when, where, and how, through the structural support of KM governance, success will be more readily realized. Conclusions about the utility of this statistical data collection and analysis suggest that the outputs of research such as this, as part of a continuum of mixed methodology, are valid and strengthen Information Systems research.
It concludes that leveraging knowledge through programs of activities is a global phenomenon. KM governance, as the exercise of authority over the development and implementation of strategies to manage organizational knowledge, enables the long-term realization of anticipated benefits; therefore, KM governance leads to KM success. The sections are arranged as follows. The concept of KM success is introduced in the context of the alignment of KM with business strategy to support the aims and objectives of the organization. We propose that this is clearly achieved through the effective governance of KM. KM governance activities are examined and contextualized. Following this, the reader is presented with the design and results of the survey, and conclusions are drawn.
Understanding KM Success
This examination of the literature focuses on understanding the realization of KM success as the effective leveraging of organizational knowledge resources to achieve the aims and objectives of the organization. Leveraging knowledge resources includes harnessing both tacit and explicit forms as appropriate to the industry sector and the specific needs of the organization. We suggest that making the alignment of KM strategy and business strategy explicit, through authorized direction at the executive level of the organization, facilitates KM success. It does this in several ways: it transparently permits policy development to align KM, it allows the management of risk to the KM strategy, it gives direction to the fiscal security of KM activity, and finally it creates a frame for the evaluation or measurement of KM outcomes against the strategic aims and objectives initially developed. This enables the review and revision of policy and direction against any changes in strategic intent within the organization. How, then, can we understand indicators of KM success?
Business Strategy and KM Alignment
The mission and values that underpin business strategy correlate with the ontology of the organization. A business strategy is a formed plan that puts together, in a coherent form, the substantive goals, policies, and activity of the organization into a logical and articulated series of statements (McKay & Marshall, 2004). The association of KM with business strategy and alignment is linked to the mission and values of the organization (De Tienne, Dyer, Hoopes, & Harris, 2004; Henderson & Venkatraman, 1999; Zack, McKeen, & Singh, 2006). Measurement and evaluation support and clarify the alignment of the outcomes of business strategy implementation with KM strategy development and implementation. Mintzberg, Ahlstrand, and Lampel (1998) suggest that developing a plan requires us to look ahead, yet commonly strategy is more of an organizational pattern decided on the basis of past behaviors. Therefore, a pattern or strategy that is realized but was not devised or intended can consequently be described as 'emergent' (Mintzberg, 1994). The term 'emergent' implies no control over the development of the strategy. It would be unrealistic to expect to be able to predict all of the consequences or outcomes of a strategy. Some outcomes will be expected, while others are consequent on the human context and on both the internal and external factors that impact the organization. Strategic leadership leverages all classes of organizational assets: human, financial, physical, IP, informational, technological, and relationship (Nonaka & Takeuchi, 1995; Schultze & Stabell, 2004; Spender, 1996). KM is understood as a broad concept that addresses the full range of processes by which the organization deploys knowledge. These processes involve the acquisition, distribution, and use of knowledge in the organization and include interpersonal, socio-technical, and technical modes.
Knowledge exists in both explicit and tacit forms, and therefore differing strategies are required to leverage and deploy it effectively (ActKM discussion forum, 2006; Dalkir, 2005; Lavergne & Earl, 2006; Tanudjojo & Braganza, 2005; Tiwana, 2002). There is a broad variety of tools and techniques canvassed in the literature, discussed at conferences, and on KM list-servs (Riege, 2005; Sun & Scott, 2005). These are implemented with various levels of organizational utility and of user acceptance (Foss, Husted, Michailova, & Pedersen, 2003; Schroeder & Pauleen, 2007; Stehr, 2003; Wiig, 1997b, 1999). A strategic outcome of the development and implementation of a KM strategy should be the creation of value for the organization. There are multiple dimensions of value in leveraging organizational knowledge resources. Aligned to the business strategy, they can be understood from the multiple perspectives of finance, innovation, business processes, customer value, and human value to the organization (Kelleher & Courtney, 2003; Smits & de Moor, 2004).
Governance of Knowledge Management
KM governance is still mentioned infrequently in the extant literature (Jennex & Zyngier, 2008), and within that spectrum, the roles and tasks of KM governance were not fully explored until first developed by Zyngier, Burstein, and McKay (2004) and finally defined by Zyngier (2005). KM governance demands purposeful consideration of the knowledge strategies in place in the long and medium term. KM governance uses measurement of the effectiveness and efficiency of all aspects of KM as a tool to ensure that KM objectives and benefits are realized. A governance framework operates across key domains: principles, infrastructure strategies, architecture and investment, and prioritization.
Governance provides check-and-balance mechanisms that enable the decision-making processes and the resulting processes to add value in delivering service to the enterprise. An emphasis on strategy, risk management, delivering financial value, and performance measurement indicates the ongoing management of best practice. Applied to organizational IT, it is suggested that 'at the heart of the governance responsibilities of setting strategy, managing risks, delivering value and measuring performance, are the stakeholders values, which drive the enterprise and IT strategy' (IT Governance Institute, 2001, p. 10). This mechanism feeds back both the positive and negative aspects of performance to moderate and improve practice. Governance of KM guides strategic thinking about the policies and strategies implemented, both in the long term and in the medium term. KM strategies are therefore never static. Governance provides a mechanism for feedback, allowing for the dynamic modification and development of existing processes and practices. Through the development of policy, KM governance determines how decisions are made and how solutions are developed for resolving obstacles; it also addresses the strategic alignment of KM strategy with corporate strategy, risk management, access to knowledge resources and the conditions of that access, and quality maintenance. As with IT governance, the development of policy to guide strategy, the management of risks, the delivery of value, and the measurement of performance, while reflecting stakeholders' needs and values, drive the KM strategy (IT Governance Institute, 2001). How this function is fulfilled can be measured in the timeliness of service delivery and the satisfaction levels of the internal and external stakeholders. Linked to the strategic goals, the mission, and the values of the organization, the KM strategy must support the work and transfer of knowledge in the organization. This should be reflected in the policies that underpin KM strategy development and implementation. It is the role of governance to establish policies for this purpose.
We conclude that authority, financial responsibility, risk management, measurement and evaluation, and benefits realization are all aspects of governance. Governance therefore comprises the following elements, to varying degrees according to the scope defined by each organization: the development of policy to define KM for the organization; transparency of terms of reference; authorization of strategy; oversight of risk management; oversight of financial management and the allocation of resources; review and revision of policy; and the prioritization of strategic knowledge needs. In the next section of this chapter we develop the attributes of KM governance and strategic outcomes.
Authority
Authority is different from power. The exercise of power has organizational costs and benefits. O'Dell and Leavitt (2004) suggest that the exercise of power may require trade-offs against interpersonal relationships. The exercise of legitimated power is expected within an organizational context. This is authority rather than power, as authority is provided by rank or position within an organization. The position of the individual is endowed with power; that is, the position has authority regardless of the charisma or personal power of the holder of that office (Weber, 2005). The exercise of authority may strengthen the authority base from which it derives and which it serves. Individuals with authority are expected to exercise that authority and may be viewed as negligent, lax, or inattentive to their responsibilities when they fail to do so (Pfeffer, 2005). KM governance formalizes decision-making authority that exercises financial responsibility, evaluation, measurement, and risk management in service delivery. KM governance ensures that these processes meet organizational needs and expectations, and that problems that arise are resolved "by setting the rules of the game" (Braganza, Hackney, & Tanudjojo, 2007, p. 12).
The development of KM policy enables the satisfaction of each of these elements through the clear identification of what the firm and its stakeholders need and envision from KM, which in turn gives a context for the development and implementation of KM strategies.
Risk Management
By engaging with and understanding the risks to KM strategy development and implementation, it becomes possible to develop a means of risk resolution. Risk management is a proactive strategy of analysis and anticipation of risks to an information systems strategy before they arise (International Standards Organization, 2006; Standards Australia, 2005). The resolution of such risks may require organizational change management, the provision of additional financial or infrastructural support, or a realignment of the original strategy in light of unforeseen or emergent activity within the organization. Risk management requires regular evaluation of the strategy and of the organization that it serves. Governance processes manage the risks of KM in order to acknowledge and challenge the cultural issues, structural obstacles, and other relevant issues as they arise during the implementation and ongoing operation of the strategy. The management of these risks assists in their resolution and strengthens strategies to manage knowledge within the organization. The need for risk management in KM was formally indicated in 2005 (Standards Australia), with the need to identify the assets, risks, and controls associated with the implementation of strategy. The literature suggests that obstacles to the effective management of organizational knowledge include a management culture in the organization that hinders KM, with concomitant change management issues. Additionally, the philosophy of knowledge management is often inadequately understood in the organization, and conflicts of organizational priorities are problematic for the development and initiation of a KM strategy. For many organizations, the development of criteria for knowledge collection is difficult (Chua & Lam, 2005; De Long & Fahey, 2000; Wiig, 2004; Zyngier, 2002).
Financial Fiduciary Duty
Organizations frequently cite financial constraints as an obstacle to the development of a KM strategy (Chua & Lam, 2005). KM activity, as with other IS investments, is seen as a cost centre or service activity. It is funded either by cross-business-unit investments or from central head office funds (Swart & Kinnie, 2003). Such expenditure or internal investment is seen as supporting cross-business-unit, inter-business-unit, or intra-business-unit activity. Governance processes seek accountability for expenditure on KM strategy based on a rationale for the financial allocation that is required (Van Grembergen, De Haes, & Van Brempt, 2007; Weill & Ross, 2004). The governing authority is informed and able to effectively approve or reject a major financial request, although smaller financial decisions are often delegated to management below the governing entity level (Kaen, 1995). Measurement activities support financial responsibilities in that they provide the evaluation measures required for effective ongoing financial responsibility. These activities ensure that a KM strategy can also be fiscally viable through its alignment with the policies and goals of the organization (Zyngier, 2007).
Measurement and Evaluation
KM governance processes incorporate evaluation and measurement in order to prove value, to progress, and to develop existing practices. Evaluation looks at both the successes of and the obstacles to the implementation of a KM strategy. Evaluation of successes must take into account the contribution made to the aims and objectives of the organization (Zyngier, 2007). Where the successes make a contribution, they should be continued. Where they do not make a contribution, consideration should be given to whether they should be continued.
There are a number of criteria used to establish the return on investment for KM strategies. Simple measures include staff retention or improvement in "product to market" delivery on time, in quantity, and in quality (Kelleher & Courtney, 2003; Smits & de Moor, 2004). Others include human capital growth (Liebowitz & Suen, 2000), intangible assets (Sveiby, 2001), the Balanced Scorecard (Kaplan & Norton, 2001), and examination of the normative, operational, and strategic goals of the strategy to see whether they are being met (Probst, Raub, & Romhardt, 2000).
Establishing KM Success
Deliberate strategies are usually articulated as a plan. Mintzberg (1994) suggested that strategic planning processes fail when they are not constructed to understand, internalize, and synthesize, that is, to learn from the successes or failures of the strategic process as it is implemented. It can be suggested that some approaches to KM are vulnerable unless the strategy is conceived of as a learning process. The step of articulating a suitable KM strategy or series of programs for an organization is only part of the challenge. Ensuring effective implementation and ongoing development and redevelopment is vital to the success of a KM strategy that is capable of response and change. Devising appropriate KM processes, structures, policies or principles, and mechanisms to ensure sound decision making over time is required, and all of these indicate that knowledge management governance is required. KM governance acts to support organizational governance through transparent activity. It provides a mechanism for managing the identified risks to knowledge assets in a planned and fiscally responsible manner. Organizational governance controls, directs, and supports KM governance through the establishment of the organizational aims and objectives of the business strategy.
KM governance should enable the strategic alignment of KM strategy development and implementation in the same way that corporate governance enables the strategic alignment of organizational activity through policy creation. There is some research that mentions the appropriateness of governance activity for KM development and implementation (Foss, 2007; Foss et al., 2003; Schroeder & Pauleen, 2005; Wiig, 1997a, 1999). However, until 2004 there was little indication of a link between the theoretical attributes of KM governance, the roles and tasks required of it, or the flow of authority and communication within it; these were subsequently described and defined by Zyngier (2005, 2007). KM governance systematizes and schematizes the following elements (Zyngier, 2005, 2006; Zyngier, Burstein, & McKay, 2005, 2006):
• The KM governance body authorizes the activity of, and is given evidence of outcomes by, those responsible for KM strategy development.
• KM is implemented according to a defined and developed strategy, and the outcomes of the strategy are reported back via KM strategy development to the KM governance body for reflective action.
• KM governance positively contributes to the realization of strategic benefits from managing organizational knowledge.
The approach to governance is supported by the KM maturity models of Ehms and Langen (2002) and Weerdmeester, Pocaterra, and Hefke (2003), which describe the stages of maturity as initial, aware, established, quantitatively managed, and optimizing. The initial stage corresponds to organizations that are curious about and exploring KM, or are tentatively taking their first steps in implementation. At this stage there is little current knowledge or control, and no recognized connection between the success of the organization and its knowledge resources. The second stage of the maturity model is where KM is repeated. There is a recognized connection between the success of the organization and its knowledge resources, with some pilot KM projects in place.
The next stage in maturity is called defined, where there are KM activities in place, a technical infrastructure, and KM roles that are defined and filled by staff. This equates to this research's stage in which the organization has implemented several KM initiatives. It is suggested that when KM is operating organization-wide, it can be considered managed, with KM solutions manifest and standard organization-wide and with "activities are secured in the long term by organization-wide roles and compatible socio-technical KM systems" (Ehms & Langen, 2002, p. 3). In this situation KM measurement can be implemented meaningfully. The final stage of maturity is described as optimizing, where the organization is able to adapt its KM strategies to meet new requirements and to adapt to changes in the organization or in the industry. In the following sections we present the reader with the research methodology and then the survey results that demonstrate the supportive impact of KM governance on the development and implementation of knowledge management initiatives.
Research Design
This research aimed to investigate the phenomenon of governance over KM: patterns of governance and outcomes in strategies to leverage organizational knowledge. The research design used a questionnaire grounded in the theoretical KM literature that was adapted from previously validated research surveys designed and used in the European Union and subsequently used twice in a longitudinal study of the understanding and uptake of KM in Australia (Ionescu, Burstein, & Zyngier, 2006; Murray, 1998; Zyngier, Burstein, & Rodriguez, 2003). Those areas of the instrument adapted to include elements of governance were informed by constructs derived through recent related case study data collection (Zyngier, 2006, 2008; Zyngier & Burstein, 2009).
This research was directed to those engaged in KM activity in public and private companies, and in government and semi-government organizations. It was conducted as an anonymous web-based survey (survey questions appended). The data was collected using nine closed online KM discussion forums and list-servs based in Europe, Canada, the USA, India, Australia, and Malaysia. Distribution lists included members from 'blue chip' companies, SMEs, government, and not-for-profit institutions. The limitations of using this frame to obtain respondents can include biased responses, questions about the representativeness of the population, and a possible low response rate. Specifically, members of such discussion boards have a personal interest in KM-related concepts; therefore they may answer differently from respondents who do not. Due to the email method of subject recruitment, this sample cannot be said to be representative of all organizations or of the opinion of all KM practitioners and does not represent the population (Dillman, 2000). Sampling issues are the same for internet-based and paper-based surveys, although with the internet they are more difficult to verify (Jansen et al., 2007). There is a legitimate problem in the use of volunteers from the internet. Non-response bias is countered by the researchers' estimate that the nine discussion groups give a total population of 5,500 members. However, it must be strongly stressed that the experience of the researchers also indicates that there is a very large overlap of memberships between discussion groups. Many members' email addresses regularly appear in postings on up to three groups on a weekly basis. This evidence suggests that the overlap may be in the order of up to 25%, with the total sample frame being closer to 4,125. This being the case, the response of 214 individuals may equate to 5.3%. This is acknowledged as a low response rate; however, the average response rate for an unsolicited survey with no personalization of address and no follow-up is 5%.
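The sample-frame adjustment described above is simple arithmetic; the sketch below makes it explicit. The 25% overlap is the authors' estimate, and the straightforward division yields a figure close to, though not identical to, the 5.3% quoted in the text.

def effective_response_rate(total_members, overlap_share, responses):
    # shrink the raw membership count by the estimated cross-forum overlap
    frame = total_members * (1 - overlap_share)
    return frame, responses / frame

frame, rate = effective_response_rate(5500, 0.25, 214)
print(frame)        # 4125.0 members in the adjusted sample frame
print(rate * 100)   # roughly 5.2 percent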
It is also acknowledged that, due to the email method of subject recruitment, this sample cannot claim to be representative of all organizations that have KM programs or of the opinion of all KM practitioners (Dillman, 2000). However, the number of responses is considered large enough and sufficiently diverse to be statistically indicative (Dillman, 2000). Therefore the responses and the results are cautiously regarded as valid through comparison with similar validated surveys. The survey data revealed a broad openness of opinions. This openness, and in some cases blunt honesty, provides indicative trend data for an understanding of the current global approach to KM. It can be reasonably argued that this population sample "is a microcosm of the [KM] population" (Bryman & Bell, 2007, p. 731) and that the respondents are therefore both representative of the selected population and accurate informants. The survey was prefaced by an explanatory cover letter. The survey instrument comprised 22 multiple-choice questions and three questions that required a text-based response. Of these, 13 multiple-choice questions collected organizational data and nine multiple-choice questions collected demographic data. The sections comprised knowledge management definitions; the tools and techniques used in the management of knowledge as an asset; cultural aspects of knowledge management; knowledge use in the future and obstacles to its management; structural mechanisms that support the development and implementation of KM strategies; and a final section that sought both organizational and individual demographic information. For some questions, respondents were required to tick appropriate responses to attitude questions. This allowed ranking of agreement with a statement relative to the positive and negative endpoints of a five-point Likert scale. The questionnaire was timed to take approximately 14 minutes to complete. The results were analyzed using the statistical software package SPSS 16.0 for Windows, employing descriptive and inferential statistics. The analysis takes account of the possibility of an acquiescent response set, where a respondent may develop a pattern of agreeing with all the items.
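One simple way to screen for the acquiescent response sets mentioned above is to flag respondents whose answers barely vary across items. The sketch below illustrates the idea; the cut-off value and the toy answers are illustrative assumptions, not the screening rule actually used in this study.

import numpy as np

def flag_acquiescent(responses, min_sd=0.5):
    # responses: (respondents x items) matrix of 1-5 Likert answers;
    # respondents whose per-item standard deviation falls below min_sd are flagged
    responses = np.asarray(responses, dtype=float)
    return responses.std(axis=1, ddof=1) < min_sd

answers = [[4, 4, 4, 4, 4, 4],   # straight-lined: likely acquiescent
           [2, 4, 3, 5, 1, 4],
           [5, 4, 5, 3, 5, 4],
           [3, 2, 4, 3, 2, 3]]
print(flag_acquiescent(answers))  # [ True False False False ]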
Table 1. Defining KM
Definition | Response rate
Business focused creation, dissemination & utilization of knowledge to fulfill org. objectives | 49.5%
The capability to create, store, retrieve and apply knowledge | 32.8%
The use of IT to capture data and information to manage knowledge | 11.3%
Intellectual assets – documents and information bases | 6.4%
were offered the opportunity to be informed of the aggregate results of this research.
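To make the sampling-frame arithmetic described above explicit, here is a minimal Python sketch using only the figures quoted in the text (5,500 combined members, an estimated 25% membership overlap, 214 responses); it is an illustration of the calculation, not the authors' own analysis.

```python
# Back-of-the-envelope reconstruction of the sampling-frame adjustment described above.
# Membership total, overlap estimate, and response count are taken from the text.

raw_membership = 5500        # combined membership of the nine discussion groups
estimated_overlap = 0.25     # members counted more than once across groups
responses = 214              # usable survey responses

effective_frame = raw_membership * (1 - estimated_overlap)   # about 4,125 unique members
response_rate = responses / effective_frame

print(f"Effective sampling frame: {effective_frame:.0f}")
print(f"Response rate: {response_rate:.1%}")   # roughly 5%, the low rate discussed above
```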
SURVEY RESULTS: KM GOVERNANCE AND STRATEGIC MANAGEMENT

Respondents came from 34 countries across every continent, indicating that KM is a widely practiced organizational activity. The first section of the survey deals with the respondent's definition of KM and with issues relating to KM strategies contributing to the achievement of business goals. The responses defining an organization's understanding of KM, shown in Table 1, indicated a strong trend towards understanding KM as a business-focused approach comprising the whole collection of processes that govern the creation, dissemination, and utilization of knowledge to fulfill organizational objectives. This definition clearly separates respondents who see KM primarily as the capability to create, store, retrieve, and apply knowledge from those who take a more strategic view of the utility of organizational KM. The other options offered were KM as intellectual assets and KM as a technological concept. Only four percent of respondents chose not to define their organization's understanding. The research then sought to establish from the data which role has authority over the KM strategy. The survey instrument suggested that this might be a CEO or Managing Director, a
Figure 1. Authority for KM
Chief Knowledge Officer, IP Officer, or Chief Learning Officer, a Chief Information Officer, a stakeholder group, a Director of HR, a Consultant, or perhaps no nominated person. The inclusion of a stakeholder group was new among these variables but was substantially represented in the data. Stakeholder theory embraces the notion that the needs of the full spectrum of interests, spanning the organization's owners or shareholders (embodied in the strategic aims of the organization) and its employees, should all be acknowledged in organizational decision making (Van den Berghe & De Ridder, 1999). Added to this is the need to engage both the staff who deliver elements of a KM strategy and those who are expected to participate in the sharing of knowledge. This may lead to better buy-in from both sides, in the delivery and in the activity of a KM program of strategies. Case study and anecdotal evidence points to the appointment of a CKO, a CLO, or a Chief of IP, but also shows that these roles are often misunderstood, under-resourced, and bound to fail (Desouza & Raider, 2006; Lakshman, 2007). As can be seen in Figure 1, we asked about authority for KM rather than
leadership of KM, and found that authority most often lies at the executive level of the organization. The data evaluated to establish the responsibilities of that authority role are presented in Figure 2. The responsibilities suggested were modeled on constructs established through case studies, with a text-based field available for survey respondents to make additional comments. The roles revealed in Figure 1 above are all involved in setting knowledge management priorities; the development of policy to strategically align and guide strategy; evaluation or measurement of performance; review and revision of policy in the context of performance; oversight of risk management; and financial responsibility for investment in KM. Across each role the least important responsibility is the oversight of risk management, while the prioritizing of strategic knowledge needs and the authorization of strategy are of greatest, though not equal, importance in each case. There is, however, clear variance between the responsibilities. For the CEO and for the stakeholder group the prioritization of strategic knowledge needs is reported more highly than KM policy development, while the obverse is true for the CKO and the CIO, for whom the prioritization of knowledge needs is secondary.
Figure 2. Governance roles and responsibilities
Drawing back to the literature, we can confirm that all authority roles demonstrate KM responsibilities hitherto revealed only in case study analysis, affirmed here by this global survey. These responsibilities are proportionally represented, although there is variance between the roles in the extent of responsibility for each facet of the governance spectrum of activity. We sought to investigate the variables that might influence the distribution of roles: the size, distribution, and constitution of the organization, cross-tabulated with the named roles of governance. In Figure 3 an unambiguous and consistent pattern of governance authority emerged in relation to the size of the organization. The large organizations with more than 1000 employees and the small organizations with fewer than 50 employees stood out in their
selection of a CKO or the CEO as the role with authority for KM. Stakeholder groups and the CIO were also nominated by large organizations, albeit to a lesser degree. Small to medium-sized enterprises employing between 100 and 1000 people showed a more even spread of authority for KM. It is noteworthy that only very few very large or very small organizations retained consultants and endowed them with authority roles. In small organizations, this might indicate an overall lack of capacity to employ new staff with appropriate skill sets, or a lack of time or skills among existing staff. In large organizations, the employment of consultants may indicate a lack of capacity to understand the skill sets required for KM that could be met by internal staff resources, or a lack of time or skills within existing staff structures. In both very large and very small organizations it may also indicate a culture of, and preference for, the engagement of consultants.
Figure 3. Cross tabulation of size of organization and authority for KM
Looking in more detail at the two predominant responsibilities, we suggest that both the prioritization of strategic knowledge needs and KM policy development relate clearly to the strategic alignment of KM with organizational aims and objectives. We know from the data that 64.7% of respondents agree or strongly agree that their organization's KM program of strategies is designed specifically to contribute to achieving business goals. Figure 4 shows that strategic alignment does not equal the successful realization of benefits; however, the active pursuit of this goal does indicate a tendency to reap KM rewards. What becomes pertinent is to establish the other indicators of success for those organizations. It is not enough to say that benefits have been realized and that all other organizations need to do is emulate them. In order to learn from other cases, it is also important to examine other variables likely to further illuminate the road to KM success.
Because the volatility of the environment moderates the strategic value that can be achieved, the capacity of the organization to manage knowledge resources has a consequent impact on its performance (Zack & Street, 2007). Examination of the distribution and utilization of various KM tools and techniques is only useful in the context of the size, distribution, and perhaps the industry sector of each organization. Responses by country may merely reflect the availability of the range of tools and techniques. Seeking to define where the realization of benefits is derived, this research returns to structural elements in KM, taken here as the maturity of the KM strategy and the issue of KM governance. We observe results describing the maturity of the KM strategy and any relationship to the realization of strategic benefits. It is evident in Figure 4 below that there is a clear trend towards the realization of strategic benefits when KM is
Figure 4. Maturity of KM strategy development and implementation
escalated through the stages of maturity, beyond being established through the implementation of several initiatives, to operating organization-wide, and is ultimately optimized through inclusion in strategic planning. The trend line moves from the realization of whatever benefits can be achieved through exploration and initial implementations to a reported success rate of 92% when organizations describe their KM as mature and included in their strategic planning. In Figure 5 below a similarly strong trend is seen in the cross tabulation of governance authorities and the realization of strategic benefits. It is clear that a greater number of organizations realize strategic benefits where their KM strategy is governed at the most senior executive level: by a committed and interested CEO or Managing Director, by a CKO or similar, or by a stakeholder group. From a simple examination of these two figures we might well conclude that, in order to realize the strategic benefits of KM strategy development and implementation, an organization ought to ensure that its CEO has ultimate authority in
governing its KM, that this would be optimized at the mature stage of KM, and that KM should be included in organizational strategic planning. However, a simple interpretation of these results is not enough; the following figure requires some degree of serious contemplation. The results shown in Figure 6 are a three-layer cross tabulation of:

•	Realization of strategic benefits
•	Governance authorities
•	KM maturity
The results show organizations with mature KM programs of strategies that have realized strategic benefits from the development and implementation of those strategies. The reader should note that, as this represents only a section of the survey population, the absolute numbers in each category are limited. However, these are indeed representations of 64 success stories. The results can effectively be interpreted as indicative factors for success at the various stages of KM maturity.
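A three-layer cross tabulation of this kind can be reproduced on comparable survey data with a few lines of pandas. The sketch below is illustrative only: the column names and the toy records are hypothetical, not the study data.

```python
# Minimal sketch of a three-layer cross tabulation like Figure 6 (not the authors' code).
import pandas as pd

responses = pd.DataFrame({
    "maturity":  ["exploring", "several initiatives", "organization wide",
                  "strategic planning", "strategic planning"],
    "authority": ["CEO", "CKO", "Stakeholder group", "CKO", "CEO"],
    "benefits_realized": ["yes", "yes", "no", "yes", "yes"],
})

# Rows: KM maturity stage x governance authority; columns: whether strategic benefits were realized.
table = pd.crosstab(
    index=[responses["maturity"], responses["authority"]],
    columns=responses["benefits_realized"],
)
print(table)
```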
Figure 5. Governance authorities and the realization of strategic benefits
Figure 6. Governance authorities, KM maturity and the realization of strategic benefits
Where an organization is 'just curious and exploring' KM, those that realize any benefits are those where KM is authorized by a CEO, a stakeholder group, or a CIO. Among organizations implementing their first KM initiatives, or that have implemented at least one KM initiative, we see the same degree of involvement from CEOs but the sudden and strong appearance of the CKO's appointment and involvement. We also see the involvement of some KM consultants and the HR department. Organizations that have implemented several KM initiatives again show the involvement of a CEO in some cases; however, both the CKO and the CIO are prominent in their involvement. We see a continued involvement of stakeholder groups and of external consultants. Notably, there is no presence of the HR entity. Where KM is operating organization-wide, the CEO or Managing Director is again prominent, while the CKO has an equal level of involvement with the stakeholder group. At the same time, we see the reappearance of the HR entity together with the CIO and consultants. Finally, we examine those organizations where the development and implementation of KM is considered mature, that is, where KM is included in strategic planning. In this scenario, it is the CKO who is most important. The CEO or Managing Director is still strongly represented. The CIO is once again prominent, while the stakeholder group maintains a consistent presence. There is no evidence of consultants or the HR entity in mature KM implementations. If the reader returns to Figure 2, which examines the distribution of KM governance responsibilities, they can now draw comparisons between their own organization and the successful KM organizations shown here. They can look at the maturity of their KM implementation, the involvement of staff, and the leadership and governance responsibilities: the development of KM policy to guide activity, the prioritization of KM needs, measurement
and evaluation, financial responsibility and the oversight of risk management for the KM strategy.
CONCLUSION AND FUTURE RESEARCH

This research confirms that the management of knowledge is considered a key organizational resource by many organizations across the globe. Further, it points clearly to indicators that as the significance and consequential positive impact of KM becomes apparent, so too does the seniority of those with responsibility for KM. A majority of organizations resource their KM endeavor with senior executive staff's organizational time and resources. It is yet to be established which is the cause and which is the effect in this scenario. Organizations invest considerable resources to support the growth and sustainability of KM. The greater the maturity of the KM program and the level of investment in it, the greater the likelihood that benefits will be actualized. KM maturity is represented in six stages, likened to the KM maturity models of Ehms and Langen (2002) and Weerdmeester, Pocaterra, and Hefke (2003): 'just curious and exploring'; 'implementing first KM initiatives'; 'have implemented at least one KM initiative'; 'have implemented several KM initiatives'; 'KM is operating organisation wide'; and 'we include management of knowledge in our strategic planning'. This research extends those models to include the element of strategic planning, where KM is both derived from and embedded into organizational strategy. It must also be said that such strategic planning can only be successful where it is made sustainable by the investment of resources in that growth. We have presented the roles and responsibilities of 214 organizations, surveyed globally from March 2008 to the end of June 2008. From these we have drawn data from 64 success stories for interpretation, showing indicative factors in establishing leadership that can be cross-referenced
to governance responsibilities for success at the various stages of maturity in KM implementations. Governance authority is qualitatively different from power endowed through organizational rank. It is clearly the centre of KM decision-making authority, an executive framework to deliver the expected benefits of KM in a controlled manner. It operates through the establishment of checks and balances to strategically align activity, expressed as policy, with oversight of financial control, evaluation, and risk. We have confirmed that all authority roles proportionally demonstrate responsibilities that were hitherto revealed only in case study analysis and are now confirmed through this global survey population. We find that recognized and defined authority for KM leads to strategically aligned, successful outcomes from investment in KM. We have shown that governance through assigned authority strongly contributes to the successful management of organizational knowledge in support of strategic organizational management. These findings are clear-cut and explicit. Future research will raise expanded issues as both the text-based and other statistical data are further examined.
REFERENCES

ActKM discussion forum. (2006). On Governance. Retrieved 20 February, 2006, from http://www.actkm.org.au

Braganza, A., Hackney, R., & Tanudjojo, S. (2007). Organizational knowledge transfer through creation, mobilization and diffusion: a case analysis of InTouch within Schlumberger. Information Systems Journal.

Bryman, A., & Bell, E. (2007). Business research methods. Oxford, UK: Oxford University Press.
Chua, A., & Lam, W. (2005). Why KM projects fail: a multi-case analysis. Journal of Knowledge Management, 9(3), 6–17.

Dalkir, K. (2005). Knowledge management in theory and practice. Boston: Elsevier Butterworth-Heinemann.

De Long, D. W., & Fahey, L. (2000). Diagnosing cultural barriers to knowledge management. The Academy of Management Executive, 14(4), 113–127.

De Tienne, K. B., Dyer, G., Hoopes, C., & Harris, S. (2004). Toward a Model of Effective Knowledge Management and Directions for Future Research: Culture, Leadership and CKO's. Journal of Leadership & Organizational Studies, 10(4), 26–43.

Desouza, K. C., & Raider, J. J. (2006). Cutting corners: CKOs and knowledge management. Business Process Management Journal, 12(2).

Dillman, D. A. (2000). Mail and Internet Surveys: The tailored design method (2nd ed.). New York: John Wiley & Sons, Inc.

Ehms, K., & Langen, M. (2002). Holistic Development of Knowledge Management with KMMM. Retrieved 5 December, 2004, from www.knowledgeboard.com/cgi-bin/library.cgi?action=detail&id=5180

Henderson, J. C., & Venkatraman, H. (1999). Strategic alignment: Leveraging information technology for transforming organisations. IBM Systems Journal, 38(2/3), 472.

International Standards Organization. (2006). ISO/IEC 16085:2006: Systems and software engineering - Life cycle processes - Risk management. Geneva: International Standards Organization.

Ionescu, L. M., Burstein, F., & Zyngier, S. (2006). Knowledge Management Strategies in Australia (No. 2006/01). Caulfield East: Knowledge Management Research Program, Monash University.
IT Governance Institute. (2001). Board Briefing on IT Governance. Rolling Meadows, IL: Information Systems Audit and Control Foundation.
Mintzberg, H. (1994). The Fall and Rise of Strategic Planning. Harvard Business Review, (January-February): 107–114.
Jennex, M., & Zyngier, S. (2008). Security as a Contributor to Knowledge Management Success. Information Systems Frontiers: A Journal of Research and Innovation, 9(5), 493–504.
Mintzberg, H., Ahlstrand, B. W., & Lampel, J. (1998). Strategy Safari; the complete guide through the wilds of strategic management. London: Pearson Educational.
Kaen, F. R. (1995). Corporate finance: concepts and policies. Cambridge, MA: Blackwell Business.
Murray, P. (1998). The Cranfield/Information Strategy Knowledge Survey; Europe’s State of the Art in Knowledge Management. London: The Economist Group.
Kaplan, R. S., & Norton, D. P. (2001). Transforming the Balanced Scorecard from Performance Measurement to Strategic Management: Part 1. Accounting Horizons, 15(1), 87–104. Kelleher, D., & Courtney, N. (2003). PD 7502 Guide to Measurements in Knowledge Management. London: British Standards Institution. Kulkarni, U. R., Ravindran, S., & Freeze, R. (2006). A Knowledge Management Success Model: Theoretical Development and Empirical Validation. Journal of Management Information Systems, 23(3), 309–347.
Nonaka, I., & Takeuchi, H. (1995). The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. New York: Oxford University Press. Pfeffer, J. (2005). Understanding the Role of Power in Decision Making. In Shafritz, J. M., Ott, J. S., & Jang, Y. S. (Eds.), Classics of Organization Theory (6th ed., pp. 289–303). Belmont, CA: Thomson Wadsworth. Probst, G., Raub, S., & Romhardt, K. (2000). Managing Knowledge; Building Blocks for Success. Chichester, UK: John Wiley & Sons, Ltd.
Lakshman, C. (2007). Organizational knowledge leadership: a grounded theory approach. Leadership and Organization Development Journal, 28(1), 51–75.
Riege, A. (2005). Three-dozen knowledge-sharing barriers managers must consider. Journal of Knowledge Management, 9(3), 18–35.
Lavergne, R., & Earl, R. L. (2006). Knowledge Management: A Value Creation. Journal of Organizational Culture, Communication and Conflict, 10(2), 43–60.
Schultze, U., & Stabell, C. (2004). Knowing What You Don’t Know? Discourses and Contradictions in Knowledge Management. Journal of Management Studies, 41(4), 549–573.
Liebowitz, J., & Suen, C. Y. (2000). Developing knowledge management metrics for measuring intellectual capital. Journal of Intellectual Capital, 1(1), 54–67.
Smits, M., & deMoor, A. (2004). Measuring Knowledge Management Effectiveness in Communities of Practice. Paper presented at the 37th Hawaii International Conference on System Sciences, Hawaii.
McKay, J., & Marshall, P. (2004). Strategic Management of e-Business. New York: John Wiley & Sons.
Spender, J. C. (1996). Organizational knowledge, learning and memory: three concepts in search of a theory. Journal of Organizational Change Management, 9(1), 63–78.
Standards Australia. (2005). AS 5037-2005 Knowledge management - a guide (2nd ed.). Sydney: Standards Australia. Stehr, N. (2003). The Governance of Knowledge. New York: Transaction Publishers. Sun, P. Y.-T., & Scott, J. L. (2005). An investigation of barriers to knowledge transfer. Journal of Knowledge Management, 9(2), 75–90. Sveiby, K. E. (2001, July 2004). Methods for Measuring Intangible Assets. Retrieved August, 3, 2004, from http://wwwsveiby.com/articles/ IntangibleMethods.htm Swart, J., & Kinnie, N. (2003). Sharing knowledge in knowledge-intensive firms. Human Resource Management Journal, 13, 60. Tanudjojo, S., & Braganza, A. (2005). Overcoming Barriers To Knowledge Flow: Evidence-Based Attributes Enabling The Creation, Mobilization, and Diffusion of Knowledge. Paper presented at the 38th Hawaii International Conference on System Sciences (HICSS), Hawaii. Tiwana, A. (2002). The Knowledge Management Toolkit: Orchestrating IT, Strategy, and Knowledge Platforms (2nd ed.). Upper Saddle River, NJ: Prentice Hall. Van den Berghe, L., & De Ridder, L. (1999). International Standardisation of Good Corporate Governance: Best Practices for the Board of Directors. Boston: Kluwer Academic Publishers. Van Grembergen, W., De Haes, S., & Van Brempt, H. (2007). Strong Consensus on the Most Important Business and IT Goals. COBIT Focus, 3, 1–3. Weber, M. (2005). Bureaucracy. In Shafritz, J. M., Ott, J. S., & Jang, Y. S. (Eds.), Classics of Organization Theory (6th ed., pp. 73–78). Belmont, CA: Thomson Wadsworth.
Weerdmeester, R., Pocaterra, C., & Hefke, M. (2003). D 5.2. Knowledge Management Maturity Model (No. ST-2002-38513). Information Societies Technology.

Weill, P., & Ross, J. (2004). IT governance: how top performers manage IT decision rights for superior results. Boston: Harvard Business School Press.

Wiig, K. M. (2004). People-focused Knowledge Management: How Effective Decision Making Leads to Corporate Success. Burlington, MA: Elsevier Butterworth-Heinemann.

Zack, M. H., McKeen, J. D., & Singh, S. (2006). Knowledge Management and Organizational Performance: An Exploratory Survey. Paper presented at the Thirty-Ninth Hawaii International Conference on System Sciences, Hawaii.

Zack, M. H., & Street, C. (2007, June). A Framework for Assessing the Impact of Knowledge on Firm Performance. Paper presented at The International Conference on Organizational Learning, Knowledge, and Capabilities, University of Western Ontario.

Zyngier, S. (2002). Knowledge Management Obstacles in Australia. Paper presented at the 10th European Conference on Information Systems, Gdańsk, Poland.

Zyngier, S. (2005). Knowledge Management Governance. In Schwarz, D. (Ed.), The Encyclopedia of Knowledge Management. Hershey, PA: Idea Group Publishing.

Zyngier, S. (2006). The Role of Knowledge Management Governance in the Implementation of Strategy. Paper presented at the 39th Hawaii International Conference on System Sciences (HICSS39), Hawaii.
Zyngier, S. (2007). Knowledge Management Governance: a Framework for Knowledge Management Benefits Realization. Paper presented at the 8th International Research Conference on Quality, Innovation and Knowledge Management, New Delhi, India.

Zyngier, S. (2008). Risk management: Strengthening Knowledge Management. International Journal of Knowledge Management, 4(3).

Zyngier, S., & Burstein, F. (2009). Knowledge management governance as a mechanism for sustainable organizational knowledge creation and transfer. International Journal of Learning and Intellectual Capital, 6.

Zyngier, S., Burstein, F., & McKay, J. (2004). Knowledge management governance: a multifaceted approach to organizational decision and innovation support. Paper presented at the Decision Support in an Uncertain World: Proceedings of the 2004 IFIP WG8.3 International Conference on Decision Support Systems (DSS2004), Prato, Italy.

Zyngier, S., Burstein, F., & McKay, J. (2005). Governance of Strategies to Manage Organizational Knowledge - a mechanism to oversee knowledge needs. In Jennex, M. (Ed.), Knowledge Management Case Book. Hershey, PA: Idea Group Books.

Zyngier, S., Burstein, F., & McKay, J. (2006). The Role of Knowledge Management Governance in the Implementation of Strategy. Paper presented at the 39th Hawaii International Conference on System Sciences (HICSS39), Hawaii.

Zyngier, S., Burstein, F., & Rodriguez, M.-L. (2003). Knowledge Management Strategies in Australia. In Hasan, H., & Handzic, M. (Eds.), Australian Studies in Knowledge Management. Wollongong, Australia: University of Wollongong Press.
KEY TERMS AND DEFINITIONS

Authority: An established power to enforce moral or legal decisions. Organizational authority is accountable for its actions. Authority is a right to demand and instruct subordinates. Authority may also be delegated or derived from delegated control. The organization may mandate power to a role or position, placing a group or individual in authority, or power may be assigned or sanctioned by consensus.

Evaluation: The assessment of the effectiveness of service delivery and the identification of obstacles or barriers to service delivery. One means of evaluation is understanding perceptions of improvement in the manner in which the organization formalizes knowledge processes, knowledge structures, and underlying systems; these in turn affect the operations, products, or services delivered. Another means of evaluating the effectiveness of a KM strategy is through establishing increased awareness of and participation in that strategy.

Governance: A process that provides a framework of authority to ensure delivery of the anticipated or predicted benefits of a service or process. It is the operationalization of a particular organizational strategy and is therefore executed in an authorized and regulated manner. Governance acts to manage risk, to evaluate and review strategic goals and objectives, and to exercise fiscal accountability to ensure the return on investment of those strategies.

Measurement: Substantially a quantitative tool. It may rely on direct comparison of performance before and after the initiation and establishment of a KM strategy. The organization may choose to measure its performance in market competitiveness and acceptance, or it may look at the contribution of the KM strategy to financial benefits and viability. It can also measure contributions to, and growth in, the volume of explicit knowledge content stored and used by staff. Some knowledge managers may regard the increase in the resources attached to the project as a measure
of the acceptance, and hence the understanding, of the value of KM to their organization.

Return on Investment (ROI): Commonly used as an accounting term to indicate how well an organization has used its investment in resources. In a knowledge management context, ROI describes the return on both the human and the financial capital invested in that strategy. Some measures may include sustainable growth; calculable efficiencies in product development cycles; improved decision-making; better ability to initiate and integrate new employees; lower rates of staff turnover, reflecting improved employee morale; and better ability to retain customers, reflecting trust in employees' expertise.

Risk Management: A tactic to minimize the susceptibility of the KM strategy to risk and subsequent failure or ineffectiveness. Risk must be analyzed to assess the potential exposure to human or infrastructural barriers. Risk may also threaten operational or financial elements of the strategy.
APPENDIX A: SURVEY QUESTIONS

1. There are a number of definitions of knowledge management listed below. Please indicate the one that most closely reflects your organization's interpretation.
◦ A technological concept - the use of information technology to capture data and information in order to manage knowledge
◦ A business focused approach - the collection of processes that govern the creation, dissemination, and utilisation of knowledge to fulfil organisational objectives
◦ About intellectual assets - taking the form of documents and information bases
◦ The capability to create, store, retrieve and to apply knowledge
2. Select the reason/s that knowledge is important to your business
◦ Gaining competitive advantage
◦ Growing revenue
◦ Growing profits
◦ Improving market share
◦ Instigating change
◦ Identifying new markets
◦ Developing new product/services
◦ Improving efficiency
◦ Improving effectiveness
◦ Other (please specify)
3. What is your organization's current stage in KM?
4. How long has your organization been involved with formal strategies to manage knowledge?
5. The organization's KM program of strategies is specifically designed to contribute to achieving business goals
6. Since 2003 your organization's budget for knowledge and learning activities has proportionately: Grown; Remained about the same; Don't know
7. In the last 5 years the number of knowledge workers in your organization has: Grown; Remained about the same; Don't know
8. Please indicate all the KM techniques used in your organization:
◦ Facilitated networking
◦ Social network analysis
◦ Peer to peer knowledge sharing
◦ Communities of Practice – face to face and virtual
◦ Best practice replication
◦ Organisational learning programs
◦ Innovation support
◦ Strategies to protect Intellectual Property
◦ Strategic information management
◦ Storytelling
◦ After action review
◦ Knowledge brokers
◦ Narrative
9. Please indicate all the KM tools used in your organization:
◦ Video-conferencing
◦ Groupware (e.g. Lotus Notes)
◦ Online forums or list-serv
◦ Intranet
◦ Portal
◦ Expert systems
◦ Search and retrieval agents
◦ Data warehousing & Data mining
◦ Document repositories
◦ Document management systems
◦ Instant messaging
◦ Wikis
◦ Blogs
10. Who has authority for the KM strategy?
◦ CEO / Managing Director
◦ Chief Knowledge / IP / Learning Officer
◦ Chief Information Officer
◦ A stakeholder group
◦ Director of HR
◦ Consultant
◦ No-one
11. Is there a formal terms of reference or position description for this authority?
12. What does the authority cover?
◦ KM Policy development
◦ Developing terms of reference
◦ Authorizing strategy
◦ Oversight of risk management
◦ Oversight of financial management and allocation of resources
◦ Review and revision of policy
◦ Prioritization of strategic knowledge needs
13. Who is responsible for planning and development of KM strategy?
◦ CEO / Managing Director
◦ Chief Knowledge Officer
◦ Chief Learning Officer
◦ Chief Information Officer
◦ Director of HR
◦ A department/function
◦ Consultant
◦ Informal cross-functional team
◦ Formal cross-functional team
◦ No formal role exists
14. What is involved in KM strategy development? ◦ Build ways to leverage explicit knowledge ◦ Build ways to leverage tacit knowledge ◦ Map/Audit knowledge resources ◦ Identify and or locate knowledge and information ◦ Selection of KM techniques ◦ Selection of KM tools ◦ Measure or evaluate effectiveness of KM ◦ Financial management ◦ Analyze and remedy risks to KM strategies ◦ Benchmark best practices 15. Who is responsible for the implementation of KM strategy? ◦ CEO / Managing Director ◦ Chief Knowledge Officer ◦ Chief Learning Officer ◦ Chief Information Officer ◦ Director of HR ◦ A department/function ◦ Consultant ◦ Informal cross-functional team ◦ Formal cross-functional team ◦ No formal role exists 16. What is involved in KM strategy implementation? ◦ Collect or gather knowledge and information ◦ Organise knowledge ◦ Implement learning strategies ◦ Broker or distribute knowledge ◦ Facilitate Communities of Practice ◦ Roll out and implement KM tools ◦ Best practice replication ◦ Facilitate peer to peer knowledge sharing ◦ Mentoring, training, coaching ◦ Measure or evaluate effectiveness of KM 17. Is there a relationship between Corporate Governance and KM? 18. Reporting relationships – please indicate role that applies ◦ Those with authority for KM report to: ◦ Those who are responsible for development of KM strategy report to: ◦ Those who are responsible for implementation of KM strategy report to 19. Has the development and implementation of your KM strategy realized strategic benefits? 20. Please describe any contextual issues or obstacles you feel are fundamental to moving knowledge management forward in your organization. 21. Background Information: What is your role? What industry sector is your organization in? 22. Is your organization: Listed on a stock exchange; privately owned; Government sector; Semi Government sector; Not-for Profit organization
23. How many employees in your organization? 24. How many locations does your organization have? 25. Employment: How long have you worked in your organization? How long have you been in your current position? 26. What is your highest level of education? 27. What is your age group?
Chapter 5
An Evaluation of Factors that Influence the Success of Knowledge Management Practices in US Federal Agencies Elsa Rhoads The George Washington University, Institute of Knowledge & Innovation, USA Kevin J. O’Sullivan New York Institute of Technology, USA Michael Stankosky The George Washington University, USA
ABSTRACT

This chapter investigates the status of knowledge management practices implemented across federal agencies of the U.S. government. It analyzes the extent to which this status is influenced by the size of the agency, whether or not the agency type is a Cabinet-level Department or Independent Agency, the longevity of KM Practices implemented in the agency, whether or not the agency has adopted a written KM policy or strategy, and whether the primary responsibility for KM Practices in the agency is directed by a CKO or KM unit versus other functional locations in the agency. The research also tests for possible KM practitioner bias, since the survey was directed to members of the Knowledge Management Working Group of the Federal CIO Council and KM practitioners in federal agencies.
INTRODUCTION

The implementation of knowledge management practices has been underway in both the public and private sectors for many years. For the federal government this transition was well underway
prior to the devastating events of September 11th, 2001 (9/11). However, those events increased the awareness of the value and importance of the government's stewardship of its knowledge. In fact, the 9/11 terrorist attack on the World Trade Center in New York City is considered by many to have been a 'wake-up call' for federal agencies
to make both policy and process changes in order to prevent future attacks. Knowledge management programs concentrate on managing and distributing what the government knows within and between agencies for the purpose of taking collaborative action. The basic tenet of knowledge management is that the right knowledge needs to be made available to the right people at the right time for the purpose of taking concerted action. The most important role of the federal government is unarguably to protect its citizens from harm, and specifically from terrorist threats. As a result of 9/11, President George W. Bush, upon a recommendation of the 9/11 Commission (the National Commission on Terrorist Attacks Upon the United States, created in November 2002), began to rectify the gap in sharing knowledge and coordinating action by creating the Department of Homeland Security (DHS). Twenty-two different agencies with a total of 180,000 employees were reorganized into a single agency for the purpose of preventing terrorist attacks and protecting citizens and infrastructure from threats and hazards. The intentional sharing of knowledge on the part of federal agencies is the new paradigm, albeit one in transition. The major objective is to ensure that the government knows what it needs to know, when it needs to know it. Deployment of knowledge management programs in U.S. federal agencies has been hampered by two distinct conditions:

1. Long-established hierarchical "command-and-control" management styles and bureaucratic organizational structures make it challenging for agencies to share knowledge through intra-agency collaboration, much less through cross-agency or inter-agency collaboration.
2. Agency Information Technology (IT) systems are a mixture of legacy systems cobbled together with newer systems and technologies, making interoperability a technically difficult impediment both within and between different agencies.

The management of the government's knowledge is also made difficult by the vast amount of data and information contained in its repositories. In addition, the government's knowledge comprises the working knowledge in the minds of approximately 1,800,000 federal employees (OPM Fedscope, 2004). Managing this bewildering resource of both explicit and tacit knowledge and harnessing its capabilities is enormously demanding. Much of the knowledge in government organizations, and certainly within a constituency base, is tacit in nature; that is, knowledge that cannot be easily articulated and thus exists in people's hands and minds, and only manifests itself through their actions (Stenmark, 2001; Koh, Ryan & Prybutok, 2005). A further problem is that the management of knowledge can be executed in many forms, but it is most useful to agencies when these forms are developed to fit specific agency objectives. This immediate utility is what gives knowledge its value to each agency. However, these unique uses and designs are what make it difficult to share knowledge across agencies. Much research has been pursued in the area of knowledge management in which knowledge management initiatives were internally focused and principally aimed at collaboration and knowledge sharing among employees (Almashari, Zairi & Alathari, 2002; Henderson & Venkatraman, 1993; Lai & Chu, 2003; Liebowitz, 2003-2004; Koh, Ryan & Prybutok, 2005). Unfortunately, comprehensive research into the value proposition of applying knowledge management practices to achieve improvements in productivity, either within a single federal agency or through the transfer of knowledge between agencies to serve common customers, has been mixed. The focus of this chapter and our research has been to answer the following question:
What factors influence the success of knowledge management practices within the U.S. federal government? To answer this question, a survey of KM practitioners in federal agencies, members of the Knowledge Management Working Group (KM WG) of the Federal CIO Council (Chief Information Officers), was conducted in 2005. The current website of the Federal Knowledge Management Working Group is located at http://km.gov. The survey identified the status of knowledge management programs in federal agencies and examined the extent to which this status was influenced by the size of the agency, whether the agency type was a Cabinet-level Department or an Independent Agency, the longevity of established KM Practices in the agency, whether or not the agency had adopted an effective KM policy or strategy, and whether the primary responsibility for KM Practices was directed by a CKO or KM unit versus another type of functional unit in the agency. The question of the "success" of KM Practices is addressed by the fact that we now have benchmark data on the KM Practices established in individual U.S. federal agencies, drawn from a credible source: the responses of the KM practitioners in these agencies themselves. This survey has obtained the first baseline data on this subject.
RESEARCH BACKGROUND

One of the most inhibiting and intransigent barriers contributing to the lack of knowledge transfer within and across federal agencies is the lingering presence and influence of the historical culture of organizational bureaucracy built into federal organizations. This is a hierarchical approach to management, more appropriate to the Industrial Age, in contrast to the practical and intentional establishment of collaborative working
relationships between employees from different operational entities that is more suitable to the Knowledge Age of the twenty-first century. Employees must be prepared to work across the independent silos of agency operations to bring their collective knowledge to bear on the most demanding issues facing the government, in times of normalcy as well as in emergencies. In many European countries, the central government establishes knowledge management planning and implementation for the whole country through a central administration, and this acts as an effective mandate for knowledge management within and between governmental bodies in the country. Many of these countries are members of the Organization for Economic Cooperation and Development (OECD). The OECD promotes knowledge management, and for that reason a separate discussion of the OECD's role is included in this chapter. In the United States, there is no comparable centrally administered mandate for the adoption of knowledge management programs in U.S. federal agencies. The Office of Management and Budget (OMB), reporting to the President, mandates agency commitment to adopt an E-Government approach to provide electronic services to the U.S. public, in response to the President's Management Agenda (PMA) adopted in 2002. The PMA contains the principles of the President's vision for government reform: agency organizations that are citizen-centered rather than bureaucracy-centered, results-oriented, and market-based. There is no concomitant mandate for the intentional transfer of knowledge through the implementation of knowledge management programs within and between agencies. The Government Accountability Office (GAO) is an independent, non-partisan agency reporting to Congressional policy-makers and the public under the leadership of the U.S. Comptroller General (who holds the position for a period of 15 years), with the authority to improve the performance
and ensure the accountability of federal government programs. In 1998, David M. Walker was appointed Comptroller General by President William J. Clinton. In the latter part of his career at the GAO, his work led him to conduct a series of "wake-up" tours around the country to warn about the worsening fiscal condition of the federal government. In March 2008 he joined the Peter G. Peterson Foundation in New York as President and CEO. He recently shared his knowledge in a new book titled "Comeback America: Turning the Country Around and Restoring Fiscal Responsibility". While the GAO promotes knowledge management, and embraces it internally, it does not mandate knowledge management. However, the specific use of knowledge management for its transformational effect on organizations has received longstanding support (GAO, 2004). One of the stated goals of the GAO's Strategic Plan is to transform the federal government's role and how it does business to meet twenty-first century challenges. The GAO considers transformation the key to achieving a new model of management for government organizations. In 2005, the GAO designated a new "high risk" watch area for federal agencies: that of establishing appropriate and effective information-sharing mechanisms, citing the need to secure the homeland in a government-wide effort involving multiple federal agencies (GAO, 2005). In December 2005, the 9/11 Commission remonstrated that "The failure to share information among and within agencies cost us dearly on September 11th", and concluded that "No single step is more important to strengthen our intelligence than to improve information sharing" (9/11 Commission Report Card, 2005).

Public Sector Governance: Traditionally, public administrations are bureaucratic organizations, an operational definition that gives a better understanding of the difficulties of bringing about change (OECD, 2000). The office organization is a collective order, a legitimate domination based on a set of procedures, a professional organization based on process. The production of services in a bureaucratic organization follows the concepts of specialization and sequencing of tasks. The advantage to employees of this segmented, bureaucratic work style was that there was no requirement for employees to collaborate with others (Dupuy, OECD, 2000). From the other side of the equation, to receive services the public had no choice but to follow the sequential steps imposed by the organization's operational procedures. This is the antithesis of today's customer-service orientation between the government as provider and the public as customer. Sistare (2004) describes the concept of achieving government transformation and reorganization for the twenty-first century by means of a "virtual reorganization". This has become increasingly possible due to the growth of the Internet. There are four possible paths to achieve the virtual reorganization of federal agencies in lieu of a physical reorganization: virtual reorganization through e-Government (firstgov.gov); virtual reorganization through coordinating councils (the Council of Chief Information Officers); reorganization by commission (the 9/11 Commission); and reorganization via legislative authorization (the forming of DHS) (Sistare, 2004). An effective implementation of knowledge management to achieve electronic government requires that knowledge be managed horizontally across agencies. Citizens are not cognizant of where or how the government information they require is created, or whether the information they seek needs to be aggregated by federal agencies to provide the ultimate service. To effectively meet these objectives requires that knowledge be integrated between independent segments of common service functions across government. Even though the federal government is organized vertically, with each department and agency serving the public directly, much of what federal agencies do to effectively distribute "what it knows" to improve public services is achieved by sharing knowledge
through horizontal partnerships. Government agencies are vertical bureaucracies (federal, state, and local) that are inherently knowledge-intensive (Barquin, Bennet, & Remez, 2001). Knowledge management requires leveraging the collective knowledge of agencies to fulfill the mission of the overall federal enterprise (Barquin, Bennet, & Remez, 2001).

Knowledge Management Practices and the OECD: Headquartered in Paris, the Organization for Economic Cooperation and Development (OECD) provides a forum where the governments of 30 industrialized countries with democratic governments work together to solve the common economic, social, and governance challenges of the member countries. Knowledge management forms the central core of the OECD's focus. In January 2002, the OECD launched the first international survey of knowledge management practices for ministries/departments or agencies of central government in OECD member countries. A comparison of seven functional sectors of central governments was obtained through a survey of 20 participating members. These functions were the ministries of Economy, Trade and Industry, Education, Finance/Budget, Foreign Affairs, Health/Social Affairs, Home Affairs/Interior, and State Reform/Civil Service/Public Administration. The United States, a member of the OECD, submitted responses to the survey in all except the Finance/Budget sector. The broad conclusions of the OECD KM Practices Survey were that, within the central government in OECD countries, activities are knowledge-intensive, staff are highly educated, a critical mass of knowledge exists in these public organizations, and central governments must have superior mechanisms with which to share knowledge across government organizations. The OECD survey was designed to review the actual KM Practices implemented, as well as to elicit a self-assessed perception of the results of these practices.
One of the significant results of the survey was that there is a need to think about knowledge management from a 'whole of government' perspective rather than from the perspective of individual organizations within central government (OECD, GOV/PUMA, 2003). This is a key difference from how knowledge management programs are implemented in the U.S. federal government: while they may be adopted within individual departments or agencies, their adoption is not directed for the federal government as a whole.
RESEARCH DESIGN AND METHODOLOGY

The research methodology was designed to investigate the factors affecting success in U.S. federal agencies' adoption of Knowledge Management Practices in the Cabinet-level Departments and Independent Agencies reporting to the President. The Bureaus and Program Offices of the large Cabinet-level Departments comprise approximately 130 organizations.

Research Goal: The central research goal was to examine the influence of five key factors on the success of knowledge management practices within the federal government.

Research Hypotheses: Five research hypotheses were developed to address this research goal.

HS: Small federal agencies have higher KM Practices Index Scores than large agencies.
HI: Independent agencies have higher KM Practices Index Scores than Cabinet agencies.
HL: Agencies where KM has been in place for more than 4 years have higher KM Practices Index Scores than agencies where KM has been in place for less than or equal to 4 years.
HP: Agencies with a commitment to an effective written KM policy or strategy have higher KM Practices Index Scores than agencies with no effective written KM policy or strategy.
Table 1. Survey research population

The 16 U.S. Federal Government Cabinet-level Departments: Agriculture; Commerce; Defense; Education; Energy; Environmental Protection Agency; Health and Human Services; Office of Homeland Security; Housing & Urban Development; Interior; Justice; Labor; State; Transportation; Treasury; Veterans Administration.

The 10 U.S. Federal Government Independent Agencies: Agency for International Development; Army Corps of Engineers; General Services Administration; National Aeronautics and Space Administration; National Science Foundation; Office of Management and Budget; Office of Personnel Management; Small Business Administration; Smithsonian Institute; Social Security Administration.
HR: Agencies where the KM responsibility is assigned to a KM unit have higher KM Practices Index Scores than agencies where the KM responsibility is assigned to a different department.

Each of these hypotheses has an associated null hypothesis. The dependent variable for this study is an index score of Knowledge Management Practices in federal agencies. The independent variables are the size of the agency, whether the agency type is a Cabinet-level Department or an Independent Agency, the longevity of KM Practices, whether or not there is a commitment to adopt a KM policy or strategy, and whether the primary responsibility for KM Practices is directed by a CKO or KM unit versus other functional locations in the agency. The KM Practices survey questions for this research are drawn from both the Statistics Canada KM Practices survey conducted in 2001 and the first international KM Practices survey conducted by the OECD in 2002. Prior to the KM Practices survey of U.S. federal agencies, Belgium, Canada, Denmark, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Korea, Norway, Poland, Portugal, Slovak Republic,
Sweden, United Kingdom and the United States participated in the OECD Knowledge Management Practices survey. This survey was limited to seven common functional government areas of operation in each country. The KM Practices survey of U.S. federal agencies was conducted in March 2005. Research Population: The survey research targeted the population of the 16 Cabinet-level Departments and the 10 Independent Agencies of the federal government listed in Table 1. Research Instrument: The KM Practices research instrument was used to determine the extent of KM practices implemented in the 26 federal departments and agencies and the type of practices most frequently employed. It consisted of 45 questions in three sections: 27 questions about specific Knowledge Management Practices; 11 questions regarding the perception of effective results from the use of KM Practices; and 6 mixed questions. Validation and Reliability: The survey instrument was based on the previous KM Practices surveys conducted by Statistics Canada, Denmark, France, Germany, and the OECD. Our Web-based
online survey was tested with a Survey Special Interest Group (SIG) of KM WG members who agreed to take the survey and provide feedback prior to its distribution. Feedback from these members indicated a concern about the time required to take the survey, and adjustments were made. Research Procedures: Procedures recommended by Statistics Canada were followed in the administration of the research instrument. In addition, the recommendation to use a Likert scale, made by the central government office in Germany that had piloted a Knowledge Management Practices survey, was followed. The Canadian survey used a predictive scale that asked whether the respondent had implemented the KM Practices within the past 24 months, or whether they were considering an implementation within the next 24 months. In January 2005, in a review of the survey prior to its distribution, Statistics Canada advised the use of a four-point Likert scale instead of a five-point scale.
DATA COLLECTION AND ANALYSIS

The survey was distributed in the first week of March 2005. It remained open for 6 weeks and was closed in April 2005. The online survey was a blind survey, ensuring that no individual names were attributable to the information collected from each agency. The survey was distributed by a survey company via an e-mail code to 326 KM practitioners, employees of U.S. federal agencies and members of the KM WG of the Federal CIO Council. After six weeks, the total count of survey responses received from 26 different agencies was 125, or 38% of the members of the KM WG. Of the 125 responses, 119 were included in the analysis, with 6 removed because of incomplete or missing data. Validation of the Survey Instrument: The reliability of our survey tool was assessed after the
initial review of the data analysis of the descriptive statistics concerning the respondent profiles, and prior to considering the validation of the research hypotheses. An analysis of the normal distribution of the variables was performed before examining the possible variance analyses. A descriptive analysis was conducted in order to verify the kurtosis and skewness values of the various factors; it is generally accepted that variables obtaining an absolute value lower than 2 are acceptable (George & Mallery, 2005). In order to assess discriminant and convergent validity, various factor analyses were conducted using the principal component method with a Varimax rotation and a Kaiser normalization. Only factors obtaining an eigenvalue greater than 1 were extracted. Cronbach's alpha was used to test the internal reliability of the KM Practices construct as well as of its five dimensions. The overall alpha value for the KM Practices construct is α=0.941, which reflects an excellent level of reliability. All the alpha values are greater than 0.7, which denotes an acceptable level of internal reliability of the KM Practices construct. Data Analysis: The KM Index Score for each agency was evaluated under 5 dimensions, resulting from the component factor analysis of the 27 KM Practice questions. Each dimension was rated separately and is described in Table 2. In order to increase the validity and the reliability of our analysis, it was decided to include only the agencies where more than one respondent responded to the survey (frequency ≥ 2). 1. Agency Size Influence. Hypothesis HS tests the difference in the KM Practices Index Score (dependent variable) associated with the size of the federal department or agency (independent variable). HS: Small federal agencies have higher KM Practices Index Scores than large agencies. Table 3 shows the size of each agency responding to the survey. While not a federal agency, The World Bank was included in the survey as a public sector organization with a long and close
Table 2. The KM Index Score Dimensions

Index Dimension | Description
Knowledge Engagement | Agency implementation of KM Practices through KM policy, strategy as responsibility of a CKO or KM unit; including formal and informal training in Knowledge Management Practices
Knowledge Exchange | Agency value system conducive to promote knowledge-sharing; improve workforce retention; monetary or non-monetary incentives; capture of Best Practices and Lessons Learned; SME locators; portal for shared documents; submit best practices
Knowledge Acquisition | Partnerships/alliances to acquire knowledge; captures external knowledge; develop CoPs; transfer knowledge to less-experienced workers
KM Responsibility | Responsibility of managers/executives; responsibility of non-management workers
KM Training and Mentoring | Formal and informal mentoring; funding for work-related courses; funding for KM study
Table 3. Agency Size

Agency | Abbreviation | Frequency | Percent | Size | Cabinet/Independent
Department of Veterans Affairs | VA | 4 | 3% | 236,495 | Cabinet
Department of the Army | US Army | 9 | 7% | 230,496 | Cabinet
Department of the Navy (incl. US Marine Corps) | US Navy | 13 | 10% | 174,350 | Cabinet
Department of the Air Force | USAF | 3 | 2% | 154,999 | Cabinet
Department of the Treasury | TREAS | 6 | 5% | 122,521 | Cabinet
Department of Agriculture | USDA | 8 | 6% | 101,472 | Cabinet
Department of Defense Civilian | DOD | 13 | 10% | 98,663 | Cabinet
Department of Health & Human Services | HHS | 2 | 2% | 63,514 | Cabinet
Department of Transportation | DOT | 9 | 7% | 55,611 | Cabinet
Army Corps of Engineers | USACE | 5 | 4% | 35,250 | Independent
Environmental Protection Agency | EPA | 2 | 2% | 18,452 | Independent
Department of Energy | DOE | 11 | 9% | 14,990 | Cabinet
General Services Administration | GSA | 5 | 4% | 12,472 | Independent
The World Bank | WB | 2 | 2% | 9,300 | Special
Government Printing Office | GPO | 2 | 2% | 2,395 | Independent
United States Agency for International Development | USAID | 7 | 6% | 2,317 | Independent
Pension Benefit Guaranty Corporation | PBGC | 10 | 8% | 780 | Special
While not a federal agency, The World Bank was included in the survey as a public sector organization with a long and close affiliation with the Federal Knowledge Management Working Group. The KM Practices Index Score variable was measured on an interval/ratio scale ranging from 5 to 20. Since most agencies were large-sized, we used the median agency size (45,431) in order to differentiate small versus large agencies. Agencies with 45,431 employees or fewer were categorized as "small" and the others as "large". An independent-sample one-tailed t test was used to analyze the differences of means between the two groups (small and large).
Table 4. Descriptive Statistics of the 2 Groups Studied (Group Statistics, KM Index Score)

Size (Small/Large) | N | Mean | Std. Deviation | Std. Error Mean
Small | 33 | 13.1192 | 2.03984 | .35509
Large | 53 | 11.2106 | 3.37009 | .46292

Table 5. Descriptive Statistics of the 2 Groups Studied (Group Statistics, KMIndexScore)

Agency type | N | Mean | Std. Deviation | Std. Error Mean
Cabinet | 75 | 11.3290 | 3.21994 | .37181
Independent | 32 | 12.6541 | 3.00754 | .53166
Figure 1. Comparison between the KM Index Score of Small and Large Agencies
Table 4 provides the descriptive statistics of the two groups studied. The probability associated with Levene's test for equality of variance is 0.012 (Table 5, row (1)). Because this is less than .05, we can be reasonably certain that the variance of the KM Index Score differs across the two groups. Data from the second row of Table 5 are used (equal variances not assumed). Applying the two-step rule, p = 0.001 (one-tailed) is lower than the pre-set α of .05, and the directionality of the difference in sample means is consistent with HS (Small 13.12 > Large 11.21). Thus, H0 is rejected and HS is accepted. We can be reasonably certain that small agencies have higher KM Practices Index Scores than large agencies.
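The two-step procedure used throughout this section (Levene's test, followed by a Student or Welch t-test and a one-tailed decision rule) could be reproduced along the lines of the hedged sketch below. The respondent-level scores are not published, so the arrays are simulated to roughly match the group means and standard deviations in Table 4, and the variable names are illustrative only.

```python
import numpy as np
from scipy import stats

# Simulated stand-in data, roughly matching Table 4 (not the original scores).
rng = np.random.default_rng(42)
small = rng.normal(13.12, 2.04, 33)
large = rng.normal(11.21, 3.37, 53)

# Step 1: Levene's test for equality of variances.
lev_stat, lev_p = stats.levene(small, large)
equal_var = lev_p >= 0.05            # p < .05 -> unequal variances -> Welch t-test

# Step 2: independent-samples t-test (HS is directional: small > large).
t_stat, p_two = stats.ttest_ind(small, large, equal_var=equal_var)
p_one = p_two / 2 if t_stat > 0 else 1 - p_two / 2

print(f"Levene p = {lev_p:.3f}, t = {t_stat:.2f}, one-tailed p = {p_one:.4f}")
print("Reject H0 in favour of HS:", p_one < 0.05)
```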
2. Cabinet Agencies vs. Independent Agencies Influence: Hypothesis HI tests the difference between the KM Practices Index Score (dependent variable) associated with Cabinet-level and Independent agencies (independent variable). HI: Independent agencies have higher KM Practices Index Scores than Cabinet agencies. The KM Practices Index Score variable was measured on an interval/ratio scale ranging from 5 to 20. An independent-sample one-tailed t test was used to analyze the differences of means between the two groups (Cabinet and Independent). Table 5 provides descriptive statistics of the two groups studied. The probability associated with Levene's test for equality of variance is 0.594 (Table 6, row (1)). Because this is more than .05, there is no reasonable certainty that the variance of the KM Index Score differs across the two groups.
Table 6. Comparison between the KM Index Score of Cabinet-level and Independent Agencies (Independent Samples Test, KMIndexScore)

Levene's Test for Equality of Variances: F = .286, Sig. = .594

t-test for Equality of Means | t | df | Sig. (2-tailed) | Mean Difference | Std. Error Difference | 95% CI Lower | 95% CI Upper
Equal variances assumed | -1.987 | 105 | .050 | -1.32516 | .66696 | -2.64761 | -.00271
Equal variances not assumed | -2.043 | 62.476 | .045 | -1.32516 | .64877 | -2.62184 | -.02848
Table 7. Longevity of Agency KM Initiatives

Longevity | Frequency | Percent | Valid Percent | Cumulative Percent
< 2 years | 38 | 31.9 | 31.9 | 31.9
2-4 years | 36 | 30.3 | 30.3 | 62.2
5-9 years | 15 | 12.6 | 12.6 | 74.8
> 10 years | 8 | 6.7 | 6.7 | 81.5
Don't know | 22 | 18.5 | 18.5 | 100.0
Total | 119 | 100.0 | 100.0 |
Data from the first row of Table 6 will be used (equal variances assumed). Applying the two-step rule, p = 0.025 (one-tailed) is lower than the pre-set α of .05, and the directionality of the difference in sample means is consistent with HI (Independent 12.65 > Cabinet 11.33). Thus, H0 is rejected and HI is accepted. It is reasonably certain that Independent agencies have higher KM Practices Index Scores than Cabinet-level agencies. 3. KM Longevity Influence: Hypothesis HL tests the difference between the KM Practices Index Scores (dependent variable) associated with the longevity of the KM Practices (how long the KM Practices have been in place in the organization) (independent variable). HL: Agencies where KM has been in place for more than 4 years have higher KM Practices Index Scores than agencies where KM has been in place for 4 years or less. Table 7 illustrates that in our survey population, 62.2%, or more than half, of the agencies have had KM Practices in place for 4 years or less. The KM Practices Index Score variable was measured on an interval/ratio scale ranging from 5 to 20. The longevity variable is a discrete categorical variable (less than 2 years, 2-4 years, 5-9 years, more than 10 years). A one-way Analysis of Variance (ANOVA) test was performed to analyze the differences of means between the various groups. Table 8 provides descriptive statistics of the various groups studied.
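The one-way ANOVA and the planned contrast reported in Tables 10 and 11 could be reproduced along the lines of the sketch below. Since the respondent-level scores are not published, the groups are simulated to approximate the descriptives in Table 8, and the contrast weights (-1, -1, +1, +1), which compare the two "4 years or less" groups against the two longer-lived groups, are an assumption consistent with hypothesis HL rather than a quote of the authors' SPSS syntax.

```python
import numpy as np
from scipy import stats

# Simulated stand-in data approximating Table 8 (not the original scores).
rng = np.random.default_rng(7)
groups = {
    "<2 years":  rng.normal(11.18, 2.54, 33),
    "2-4 years": rng.normal(12.16, 3.05, 32),
    "5-9 years": rng.normal(13.75, 3.09, 14),
    ">10 years": rng.normal(13.31, 2.73, 8),
}

# Omnibus one-way ANOVA across the four longevity groups.
f_stat, p_anova = stats.f_oneway(*groups.values())

# Planned contrast: long-lived KM programs (+) vs. programs of 4 years or less (-).
weights = np.array([-1, -1, 1, 1])
means = np.array([g.mean() for g in groups.values()])
ns = np.array([len(g) for g in groups.values()])
contrast = (weights * means).sum()

# Pooled within-group variance (equal variances assumed, cf. the Levene result).
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values())
df_within = ns.sum() - len(groups)
ms_within = ss_within / df_within
se_contrast = np.sqrt(ms_within * (weights ** 2 / ns).sum())
t_contrast = contrast / se_contrast
p_one_tailed = stats.t.sf(t_contrast, df_within)   # HL predicts a positive contrast

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"Contrast: t = {t_contrast:.2f}, one-tailed p = {p_one_tailed:.3f}")
```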
Table 8. Descriptive Statistics of the Different Groups Studied (KMIndexScore)

Group | N | Mean | Std. Deviation | Std. Error | 95% CI Lower Bound | 95% CI Upper Bound | Minimum | Maximum
< 2 years | 33 | 11.1827 | 2.54444 | .44293 | 10.2805 | 12.0849 | 5.00 | 16.75
2-4 years | 32 | 12.1618 | 3.04531 | .53834 | 11.0638 | 13.2597 | 5.00 | 17.32
5-9 years | 14 | 13.7454 | 3.08852 | .82544 | 11.9622 | 15.5287 | 9.05 | 18.63
> 10 years | 8 | 13.3135 | 2.72521 | .96351 | 11.0352 | 15.5919 | 9.40 | 18.88
Total | 87 | 12.1511 | 2.94919 | .31619 | 11.5226 | 12.7797 | 5.00 | 18.88
The probability associated with Levene's test for equality of variance is 0.529 (Table 9). Because this is more than .05, we cannot be reasonably certain that the variance of the KM Index Score differs across the different groups (equal variances assumed). The results of the ANOVA test can be found in Table 10. The significance value p = 0.028 is lower than the pre-set α of .05, which indicates that we can be reasonably certain that significant differences exist in KM Index Scores among the various groups studied. In order to test our hypothesis, we made a more precise comparison between the "<2 years" and "2-4 years" groups and the other groups. A pairwise multiple comparison test was performed, and the result of the contrast test is displayed in Table 11. The significance value p = 0.012 is lower than the pre-set α of .05. Thus, H0 is rejected and HL is accepted. We can be reasonably certain that agencies where KM has been in place for more than 4 years have higher KM Practices Index Scores than agencies where KM has been in place for 4 years or less. 4. KM Policy Influence: Hypothesis HP tests the difference between the KM Practices Index Score (dependent variable) associated with agencies that have adopted an effective written KM policy or strategy and agencies with no written KM policy or strategy (independent variable). HP: Agencies with an effective written KM policy or strategy have higher KM Practices Index Scores than agencies with no effective written KM policy or strategy.
Table 9. Test of Homogeneity of Variances (KMIndexScore)

Levene Statistic | df1 | df2 | Sig.
.743 | 3 | 83 | .529
The KM Practices Index Score variable was measured on an interval/ratio scale ranging from 5 to 20. An independent-sample one-tailed t test was used to analyze the differences of means between the two groups (KM policy and no KM policy). Table 12 provides descriptive statistics of the two groups studied. The probability associated with Levene's test for equality of variance is 0.620 (Table 13, row (1)). Because this is more than .05, we cannot be reasonably certain that the variance of the KM Index Score differs across the two groups. Data from the first row of Table 13 will be used (equal variances assumed). Applying the two-step rule, p < .001 (one-tailed) is lower than the pre-set α of .05, and the directionality of the difference in sample means is consistent with HP (Policy 14.24 > No policy 10.34). Thus, H0 is rejected and HP is accepted. We can be reasonably certain that agencies which have adopted an effective written KM policy or strategy have higher KM Practices Index Scores than agencies with no effective written KM policy or strategy.
Table 10. Result of ANOVA Test (KMIndexScore)

Source | Sum of Squares | df | Mean Square | F | Sig.
Between Groups | 77.348 | 3 | 25.783 | 3.191 | .028
Within Groups | 670.659 | 83 | 8.080 | |
Total | 748.006 | 86 | | |
Table 11. Contrast Test Results (Contrast Tests, KMIndexScore)

 | Contrast | Value of Contrast | Std. Error | t | df | Sig. (2-tailed)
Assume equal variances | 1 | -3.7145 | 1.44380 | -2.573 | 83 | .012
Does not assume equal variances | 1 | -3.7145 | 1.44765 | -2.566 | 26.987 | .016
Table 12. Descriptive Statistics of the 2 Groups Studied (Group Statistics, KMIndexScore)

KM policy in place | N | Mean | Std. Deviation | Std. Error Mean
KM policy or strategy | 38 | 14.2423 | 2.37113 | .38465
No KM policy or strategy | 69 | 10.3391 | 2.72698 | .32829
Table 13. Comparison Between the KM Index Score of Agencies with or without a KM Policy or Strategy (Independent Samples Test, KMIndexScore)

Levene's Test for Equality of Variances: F = .248, Sig. = .620

t-test for Equality of Means | t | df | Sig. (2-tailed) | Mean Difference | Std. Error Difference | 95% CI Lower | 95% CI Upper
Equal variances assumed | 7.411 | 105 | .000 | 3.90320 | .52667 | 2.84891 | 4.94749
Equal variances not assumed | 7.718 | 85.773 | .000 | 3.90320 | .50570 | 2.89787 | 4.90852
5. KM Responsibility Influence: Hypothesis HR tests the difference between the KM Practices Index Scores (dependent variable) and the functional area with primary KM responsibility (independent variable). HR: Agencies where the KM responsibility is assigned to a KM unit have higher KM Practices Index Scores than agencies where KM responsibility is assigned to a different department. The KM Practices Index Score variable was measured on an interval/ratio scale ranging from 5 to 20. The functional area responsibility variable is a discrete categorical variable (Human Resources, Information Technology, KM Unit, Library Services, Executive Management, Grass-roots effort). A one-way Analysis of Variance (ANOVA) test was performed to analyze the differences of means between the various groups. Table 14 provides descriptive statistics of the various groups studied.
Table 14. Descriptive Statistics of the Different Groups Studied (Descriptives, KMIndexScore)

Group | N | Mean | Std. Deviation | Std. Error | 95% CI Lower Bound | 95% CI Upper Bound | Minimum | Maximum
HR | 8 | 12.1220 | 2.38630 | .84368 | 10.1270 | 14.1170 | 9.60 | 16.33
IT | 17 | 12.6078 | 2.72768 | .66156 | 11.2054 | 14.0103 | 9.23 | 18.88
KM Unit | 22 | 12.8744 | 2.86626 | .61109 | 11.6035 | 14.1452 | 6.25 | 16.98
Library | 2 | 8.8708 | 2.33934 | 1.65417 | -12.1473 | 29.8890 | 7.22 | 10.53
Executive | 11 | 12.9644 | 2.92186 | .88097 | 11.0015 | 14.9273 | 7.63 | 18.63
Grass-Roots | 26 | 10.4692 | 3.23647 | .63472 | 9.1619 | 11.7764 | 5.00 | 17.63
Total | 86 | 11.9430 | 3.06369 | .33067 | 11.2861 | 12.5998 | 5.00 | 18.88
Table 15. Test of Homogeneity of Variances (KMIndexScore)

Levene Statistic | df1 | df2 | Sig.
.308 | 5 | 80 | .907

The probability associated with Levene's test for equality of variance is 0.907 (Table 15).
Because this is more than .05, we cannot be reasonably certain that the variance of the KM Index Score differs across the different groups (equal variances assumed). The results of the ANOVA test can be found in Table 16. The significance value p = 0.028 is lower than the pre-set α of .05, which indicates that we can be reasonably certain that significant differences exist in KM Index Scores between the various groups studied. In order to test our hypothesis, it was necessary to make a more precise comparison between the group "KM Unit" and the other groups. The result of the contrast test is displayed in Table 17. The significance value p = 0.038 (one-tailed) is lower than the pre-set α of .05. Thus, H0 is rejected and HR is accepted. We can be reasonably certain that agencies where the KM responsibility is assigned to a KM unit have higher KM Practices Index Scores than agencies where the KM responsibility is assigned to a different department.
Table 16. Result of ANOVA Test (KMIndexScore)

Source | Sum of Squares | df | Mean Square | F | Sig.
Between Groups | 113.681 | 5 | 22.736 | 2.659 | .028
Within Groups | 684.144 | 80 | 8.552 | |
Total | 797.825 | 85 | | |
Table 17. Contrast Test Results (Contrast Tests, KMIndexScore)

 | Contrast | Value of Contrast | Std. Error | t | df | Sig. (2-tailed)
Assume equal variances | 1 | -7.3375 | 4.08316 | -1.797 | 80 | .076
Does not assume equal variances | 1 | -7.3375 | 3.79480 | -1.934 | 17.591 | .069
CONCLUSION

The conclusions of the study were that the size of the agency does influence the advance of KM Practices within the federal agencies that were the subject of this study. We can be reasonably certain that small agencies have higher KM Practices, as measured by the KM Index Score, than large agencies. There was no previous expectation that agency size would have an effect on the level of the implementation of KM Practices in the research population. The study also found that whether the agency is a Cabinet-level Department or an Independent Agency does influence the advance of KM Practices within the agency. Independent agencies have higher KM Practices Index Scores than Cabinet-level Departments. There was no expectation that the type of agency, either Cabinet-level or Independent, would have an effect on the level of implementation of KM Practices, and the research gives no indication of the reason for this result; it is a new finding that could be explored further. The research data were also analyzed to determine whether agencies where KM Practices were in place for more than 4 years had higher KM Practices Index Scores than agencies where KM Practices had been in place for 4 years or less. The study found that the longevity of KM Practices does influence the advance of KM Practices within agencies. Agencies where KM has been in place for more than 4 years have a higher KM Practices Index Score than agencies where KM Practices have been in place for 4 years or less. This conclusion would appear to bear out that KM implementation matures and continues to expand over time, which is an encouraging finding. The study also found that a commitment to an effective written KM policy or strategy does influence the advance of KM Practices in the agencies included in the study. Agencies with an effective written KM policy or strategy have higher KM Practices Index Scores than agencies without one. As in most management disciplines, policy, planning and strategy set the tone for the agency's commitment to become
knowledge-centric organizations. This indicates the value that accrues to an organization whose management is committed to setting a policy for the implementation of a knowledge management program. The study found that the location of primary responsibility for KM Practices – i.e., whether responsibility is assigned to a KM unit versus a different department – does influence the advance of KM Practices within the agency. Agencies where the KM responsibility is assigned to a KM unit have a higher KM Practices Index Score than agencies where KM responsibility is assigned to a different department or unit. We can conclude that when a KM unit is created and assigned the responsibility for the implementation of a KM program throughout an agency, the visibility of this commitment results in a broader set of KM Practices being implemented. The study also tested the difference between the KM Practices Index Scores associated with survey respondents who have a KM job title and respondents who have a job title not related to KM. The aim was to determine whether respondents with a KM job title provided higher KM Practices Index Scores than respondents with a different job title. The study found that we cannot be reasonably certain that respondents with a KM job title provided higher KM Practices Index Scores than respondents with a different job title. Therefore, we can be reasonably certain that responses from KM practitioners relative to their agency's implementation of KM Practices are not biased. This research into the implementation of KM Practices in U.S. federal agencies has provided a first benchmark view of the demographic characteristics of the 26 agencies that have successfully implemented KM programs. Additional information about the nature of the KM Practices implemented was also significant – which practices were implemented most frequently across the responding agencies; the
results and benefits from the implementation as indicated by the KM practitioners themselves; and the methods of measurement applied across agencies. Unfortunately, we were unable to present this information within the context of this chapter.
REFERENCES

Almashari, M., Zairi, M., & Alathari, A. (2002). An Empirical Study of the Impact of Knowledge Management on Organizational Performance. Journal of Computer Information Systems, 42(5), 74-82.

Barquin, R. C., Bennet, A., & Remez, S. G. (2001). Knowledge Management: The Catalyst for Electronic Government. Vienna, VA: Management Concepts.

9/11 Commission. (2004). The 9/11 Commission Report: Final Report of the National Commission on Terrorist Attacks Upon the United States. New York: W.W. Norton & Company Ltd.

9/11 Commission. (2005). Final Report of the 9/11 Public Discourse Project, December 5, 2005. Retrieved January 12, 2006, from http://www.9-11pdp.org

Dupuy, F. (2000). Why Is It So Difficult to Reform Public Administration? Government of the Future. Paris: OECD Publications.

GAO. (2004). GAO Strategic Plan for 2004-2009 (Document # GAO-04-5334SP). Retrieved December 20, 2005, from http://www.gao.gov/sp.html

GAO. (2005). High-Risk Series, An Update, January 2005 (Document # GAO-05-207). Retrieved February 2, 2005, from http://www.gao.gov/sp

George, D., & Mallery, P. (2005). SPSS for Windows Step by Step: A Simple Guide and Reference – 12.0 Update (5th ed.).

Henderson, J., & Venkatraman, N. (1993). Strategic Alignment: Leveraging Information Technology for Transforming Organizations. IBM Systems Journal, 32(2), 4-16.

Koh, E. C., Ryan, S., & Prybutok, V. R. (2005). Creating value through managing knowledge in an e-Government to constituency (G2C) environment. Journal of Computer Information Systems, 45(4), 32-42.

Lai, H., & Chu, T. (2002). Knowledge Management: A Review of Industrial Cases. Journal of Computer Information Systems, 42(5), 26-39.

Liebowitz, J. (2003-2004). A Knowledge Management Strategy for the Jason Organization: A Case Study. Journal of Computer Information Systems, 44(2), 1-5.

OECD. (2003). Knowledge Management: Measuring Knowledge Management in the Business Sector – First Steps. Paris: Center for Educational Research (OECD) & Statistics Canada.

OPM, Office of Personnel Management. (2004, June). FedScope. Retrieved December 20, 2005, from http://www.fedscope.opm.gov/index.asp

Sistare, H. (2004). Government Reorganization: Strategies and Tools to Get it Done (2004 Presidential Transition Series). Washington, DC: IBM Center for The Business of Government.

Stenmark, D. (2001). Leveraging Tacit Organizational Knowledge. Journal of Management Information Systems, 17(3), 9-24.
Section 2
KM Measurements
Chapter 6
Process Model for Knowledge Potential Measurement in SMEs

Kerstin Fink, University of Innsbruck, Austria
ABSTRACT

Knowledge measurement is developing into a new research field in the area of knowledge management. To ensure that a company is successful, business, technology, and human elements must be integrated and balanced into a knowledge measurement system. The introduction of a knowledge audit is needed, with the objective of uncovering the tacit knowledge in an organization and of identifying the existing management practices. This chapter uses quantum mechanical thinking as a reference model for the development of a knowledge potential measurement system. This system is influenced by three measurement components: (1) person-dependent variables, (2) system-dependent variables, and (3) knowledge velocity. Based on several case studies conducted in small and medium-sized enterprises, a process model for the implementation of the knowledge potential framework is discussed and introduced. Future research and limitations of the model are discussed in the final part.
DOI: 10.4018/978-1-60566-709-6.ch006

KNOWLEDGE MEASUREMENT

Introduction

In recent years, not only knowledge management but also, primarily, the measurement of knowledge (Holsapple, 2008; Jennex, 2007; Skyrme, 1998; Tiwana, 2000) has been developing into a new research field. Skyrme (1998) sees the measurement and management of knowledge-based assets as one of
the most important issues for knowledge organizations. As a result, new methods, new methodologies, and new tools have to be developed to measure the knowledge of organizations and of the knowledge workers. A range of quantitative measures - mainly money-based - is available to measure the value of a firm and its intellectual capital. The focus is primarily on the measurement of stocks or flows. Business measurements are the bases for decision making. Defining and measuring the value of a company are key stra-
tegic concerns in contemporary companies. In the knowledge economy, the value of the company's knowledge and its measurement are the key drivers for success. In the knowledge-based economy (Stewart, 1997), the management and the measurement of intangible assets have become one of the most important issues. Historically, business focused on the measurement of tangible assets such as the return on investment, cash flow, and the cost of sales. In recent years, the focus shifted towards measuring intangible assets such as customer satisfaction and the knowledge of the company personnel. In light of this transition, companies are trying to combine both financial and nonfinancial measurements to achieve optimal organizational well-being. Already in 2000, the OECD (Organization for Economic Co-operation and Development) concentrated a research area on the measurement of knowledge and learning (OECD, 2000). Knowledge measurement systems can help policy makers identify where outcomes fall short of expectations. In the near future, it will be more important to calculate the amount of knowledge in specific sectors, and the rate at which knowledge is produced, with much more accuracy. The importance of measurement systems for knowledge is also pointed out by Pearson (1990). To ensure that a company is successful, business, technology, and human elements must be integrated and balanced. The key players in a knowledge organization are the experts with their skills and experiences. Amar (2002) points out that experts in knowledge organizations work together not only to achieve the goals of the organizations, but also to achieve the fulfilment of their own goals by using the organization as a vehicle. Managers in organizations have to recognize that the uniqueness and creativity of each knowledge worker will lead to customer satisfaction and to the success of the company. Knowledge workers are characterized by a high individuality and by the denial of formal and bureaucratic structures. The major competitive advantage of a knowledge
organization is the pool of knowledge workers who find creative and quick problem solutions; hence, seven identified characteristics should be taken into consideration (Amar, 2002):

• To connect the doer's work with the system outcome, end products, or services, and/or with incoming factors, inputs, services, or raw materials;
• To have professional and social interaction within and outside the organization provided by or through the knowledge work;
• To perform a variety of knowledge tasks and skills;
• To know how important and how visible the knowledge worker's part is in the organization's scheme of things, project, product, or service to the outcome;
• To believe others have a high regard for this work;
• To employ state-of-the-art technology in performance of this work;
• To provide opportunities for new learning and personal growth.
In general, knowledge measurement approaches can be clustered into two mainstream areas: (1) Cognitive Science and (2) Management Approaches. Cognitive Science deals with the nature of intelligence, and it rests on empirical studies that describe the performance of human subjects in cognitive tasks. Another way to structure cognitive science is to understand that field more deeply and to know the disciplines that contributed to its foundation. Simon and Kaplan (Simon & Kaplan, 1989) identify six disciplines which determine the field: philosophy, psychology, neurosciences, artificial intelligence, language, and cognition. These six fields correspond to The MIT Encyclopedia of the Cognitive Sciences (Wilson & Keil, 1999), which constitutes the foundation of the cognitive sciences. The Massachusetts Institute of Technology (MIT) clustering of the cognitive sciences into the fields of philosophy, psychol-
ogy, neurosciences, computational intelligence, linguistics and language, and culture, cognition and evolution establishes the basic framework for discussing the cognitive science approach because it is one of the most detailed approaches. Over the years, research into measuring the value of company intangible assets or intellectual capital (IC) has produced many methods and theories (Management Approach). Figure 1 illustrates a measurement matrix giving an overview of management measurement approaches. The author uses the classification schema from Sveiby (Sveiby, 1997) as a basic framework, and adapts it to the quantum mechanical thinking dimension, which is the key research focus of this chapter. The X-axis represents the content of measurement (money-based measurement, non-money-based measurement, quantum performance measurement). The Y-axis symbolizes the object of measurement, such as unit/process level, organizational level and, finally, individual (knowledge worker) level. The 9x9 Knowledge Performance Matrix visualizes the different measurement methods and techniques.

Figure 1. Knowledge measurement matrix

Knowledge measuring solutions can accelerate decision making processes, help enhance the speed of the business process, and deliver a decisive competitive advantage, and they can be clustered into five commonly known knowledge management categories:

1. Direct Intellectual Capital Methods (DIC). The DIC methods estimate the $-value of intangible assets by identifying various components. Once the components are identified, they can be evaluated directly, either individually or as an aggregated coefficient.
2. Market Capitalization Methods (MCM). The MCM calculate the difference between a company's market capitalization and its stockholders' equity as the value of its intellectual capital or intangible assets.
3. Return on Assets Methods (ROA). The ROA methods divide the average pre-tax earnings of an organization for a period of time by the average tangible assets of the company. The result is the ROA of the company, and it is compared with the industry average. The difference is multiplied by the organization's average tangible assets to compute the average annual earnings from the organization's intellectual capital. Dividing the intellectual capital earnings by the company's average cost of capital or by a reference interest rate results in an estimate of the value of an organization's intellectual capital.
4. Scorecard Methods (SC). SC methods identify different components of intellectual capital and corresponding indicators, which are generated in scorecards or graphs.
5. Knowledge Potential Measurement Method. This method uses quantum mechanical thinking to evaluate the intangible assets associated with knowledge work.
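A minimal sketch of the arithmetic behind the ROA family of methods described in item 3 above is shown below; the function name and all figures are invented for the example and do not come from the chapter.

```python
def intellectual_capital_roa(avg_pretax_earnings, avg_tangible_assets,
                             industry_avg_roa, cost_of_capital):
    """Estimate intellectual capital following the ROA logic: the company's
    ROA above the industry average, applied to its tangible assets, gives
    annual 'IC earnings'; capitalising those earnings at the cost of capital
    (or a reference interest rate) gives the IC estimate."""
    company_roa = avg_pretax_earnings / avg_tangible_assets
    excess_roa = company_roa - industry_avg_roa
    ic_earnings = excess_roa * avg_tangible_assets
    return ic_earnings / cost_of_capital

# Example: $12M average pre-tax earnings, $80M average tangible assets,
# a 10% industry-average ROA, and an 8% cost of capital.
print(intellectual_capital_roa(12e6, 80e6, 0.10, 0.08))   # -> 50000000.0
```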
The research focus combining the individual level, representing the knowledge worker perspective, with the quantum mechanical thinking integrating the uncertainty view of knowledge is
the objective of this chapter. The research in this chapter focuses on the process of measuring the highly uncertain knowledge of the experts who are of high importance to the organization. This field is very critical for the future success of organizations because it contains the Knowledge Potential (Fink, 2004) of the organization and its knowledge workers. The term knowledge potential refers to the skills and experience each knowledge worker possesses, based on the learning process that transforms him or her into an excellent employee. The knowledge potential is about identifying, networking, and implementing the tacit knowledge of the experts quickly to achieve the company's strategic objectives. The knowledge potential of a knowledge worker covers customer capital, networking and communication skills, competitor information, content and culture knowledge, constant learning and training processes, information about knowledge management systems, information about the organizational knowledge structure, and the evaluation of the tacit knowledge of the experts. Management has to guide knowledge workers to make their knowledge potential transparent for the organization. Therefore, the objective of this chapter is to introduce a knowledge measurement system which enables each organization to make statements about the knowledge potential of each knowledge worker. Because an uncertain character distinguishes the term knowledge, each organization has to find a measurement system to evaluate the knowledge potential of its highly important experts.
QUANTUM MECHANICAL THINKING AND KNOWLEDGE

The concept of quantum mechanical thinking and the Uncertainty Principle of Heisenberg are the basic frameworks for the derivation of the Knowledge Potential Measurement Model.
Quantum Organization

There is an aspect of uncertainty (Pearl, 1990) associated with knowledge management and measurement. Looking at our daily reasoning processes, most decisions are based on uncertain premises, meaning that most action relies on guesses. In general, it has to be accepted that uncertainty is a fact of life. Nature shows that uncertainty exists from quantum to cosmological scales. Complex systems, such as the ecosystem, the economy, society, and the climate, border on chaos and order, where nature is very creative. In the knowledge management environment, complexity and uncertainty are combined forces influencing the system and making it difficult to predict an outcome. Uncertainty is responsible for the fact that the more complex a system gets, the less precise the statements that can be made about it. Kilmann (2001) introduces the quantum organization as a new paradigm to manage organizational transformation in a world which is highly interconnected and where success depends on whether the participants progress towards self-aware consciousness. This means that the process of transformation in organizations requires that individuals develop a self-aware consciousness. The transformation of organizations has to be seen in the light of the shift from the old paradigm, which Kilmann (2001) calls the "Cartesian-Newtonian Paradigm", to the new paradigm, the "Quantum-Relativistic Paradigm". The traditional old paradigm separates people from an outside, objective material universe. This worldview is influenced by the separation of consciousness and matter. The physical world exists on its own, and it is unaffected by human beings. This means that the human mind has no effect on the nature of physical reality. The old paradigm assumes a deterministic certainty in the sense that objects are inert and only moved by external forces. Objects can be compared with a billiard ball whose position and momentum can be determined simultaneously and precisely.
The changing paradigm is influenced by the relevance of quantum mechanical thinking. The key question for Kilmann is why it is possible to apply quantum-based principles to medium-sized objects such as people and organizations (Kilmann, 2001). One reason for choosing the "Quantum-Relativistic Paradigm" is to look at the self-motion of particles and people. For Kilmann, both particles and people can be seen as monads because they are free to choose their direction and motion by themselves and because they do not need external forces to move them. Nuclear particles are similar to human beings. They have the freedom to go anywhere and even to transform themselves into a variety of other forms. This process causes uncertainty in the sense of the Uncertainty Principle of Heisenberg (Gribbin, 1999). In quantum physics, position and momentum uncertainty are the archetypal example discovered by Werner Heisenberg. This principle means that no entity can have both precisely determined momentum and precisely determined position at the same time. Photons and people are in self-motion. A second explanation for the new paradigm is the nature of the human brain, which is subdivided into two halves, the left and the right hemispheres. While the left brain is associated with more logical thinking, the right brain is responsible for processes that enable a person to recognize whole images. Zohar discusses the nature of the human being from a quantum thinking perspective (Zohar, 1997): "The essence of quantum thinking is that it is the thinking that precedes categories, structures, and accepted patterns of thought, or mind-sets. It is with quantum thinking that we create our categories, change our structures, and transform our patterns of thought. Quantum thinking is vital to creative thinking and leadership in organizations. It is the key to any genuine organizational transformation. It is the key to shifting our paradigm. Quantum thinking can link between the brain's creativity, organizational transformation and leadership, and the ideas found in the new science." This new way of working also demands a different kind of
organizational structure and a new view of dealing with employees. In the new paradigm, the knowledge of employees and the skills and experiences of experts, gained over a long period of learning and communicating with other people, stand at the center of consideration. The basic assumption of the new paradigm is that the problem solving process of an individual is directed by his inner knowledge and experience. Kilmann uses the term "quantum organization" as opposed to the Newtonian organization. The term "quantum organization" is used synonymously with networked organization or knowledge-creating organization. A "quantum organization" is characterized by a set of seven categories (Kilmann, 2001):

1. The Inclusion of Consciousness in Self-Designing Systems. This means that each employee has knowledge, skills, and experience to influence the design of the organizational system. It is a proactive approach which also includes the knowledge of stakeholders such as customers, competitors, suppliers and other partners. Each individual is contributing creativity and knowledge-in-action to solve problems.
2. Organizations as Conscious Participants Actively Involved in Self-Designing Processes. This dimension of quantum organizations implies that each employee tries to design value-added processes throughout the organization. Participants should reflect on their processes and build new knowledge which is applied to add value to the organization.
3. Cross-Boundary Processes as Explicitly Addressed and Infused with Information. In a quantum organization, its members are encouraged to exchange knowledge with other partners across the organizational boundary.
4. The Conscious Self-Management of a Flexibly Designed Organization. In contrast to the Newtonian organization, the subunits in the quantum organization are responsible for the self-management of all different kinds of tasks such as hiring, training, recruiting, educating and learning. The knowledge workers are individuals who have the freedom to self-design and self-manage daily work in order to develop creative solutions to customer problems.
5. The Internal Commitment of Active Participants. In a quantum organization, employees are committed to discover new knowledge, to build new knowledge, and to refresh the existing experiences in educational programs. The knowledge worker is responsible for seeking new opportunities for constant improvement of his own knowledge and, by this, improvement for the whole organization.
6. The Empowered Relations Among Active Participants. The high-level professionals in a quantum organization exchange their skills and experiences with other knowledge workers within or even outside the organization. A cross-boundary connection and communication with other participants helps to foster and exchange knowledge across national boundaries and to gain and improve the existing knowledge base.
7. The Eternal Self-Transformation of Flexibly Designed Organizations. Finally, a quantum organization has to nourish the trust, commitment and creativity gained in the past and transform it into present and future activities. The transformation will only be successful if the knowledge of joint ventures, mergers and acquisitions, and global networks is used for a creative problem solving process. The useful knowledge gained will build an organization that can rely on the experience and skills of its knowledge workers.
If organizations are transforming into the new paradigm, they need a corresponding quantum infrastructure which enables their employees to
use self-awareness and self-motion skills for team building, structure, strategy, process, and culture. The experts with their skills have to have a cultural environment which is not built on a standard operating system like the Newtonian organization, but rather one built on a system of complex problem solving, which often requires not only the expertise of one professional but also the sharing of knowledge with diverse experts through networking and communication.
Uncertainty Principle

In 1927 Heisenberg articulated the so-called Heisenberg Uncertainty Principle or Indeterminacy Principle (Green, 2000; Wick, 1995). According to the Uncertainty Principle, the position and the velocity of an object cannot both be measured exactly at the same time. Any attempt to measure the velocity of a subatomic particle, such as an electron, precisely is unpredictable, so a simultaneous measurement of its position has no validity. This result has nothing to do with inadequacies in the measuring instruments, the technique, or the observer; it arises from the intimate connection in nature between particles and waves in the subatomic realm. There are four properties which are important to the Uncertainty Principle: the position of the electron, its momentum (which is the electron's mass times its velocity), its energy, and the time. These properties appear as "variables" in equations that describe the electron's motion. Uncertainty relationships have to do with the measurement of these four properties; in particular, they have to do with the precision with which these properties can be measured. Until the advent of quantum mechanics, everyone thought the precision of any measurement was limited only by the accuracy of the measurement instruments used. Heisenberg showed that regardless of the accuracy of the instruments used, quantum mechanics limits precision when two properties are measured at the same time. These are not just any two prop-
erties; they are the two represented by variables that have a special relationship in the equations. The uncertainty relationship can be written more precisely by using mathematical symbols. First, the basic symbols have to be defined:

• Δx is the uncertainty in the position measurement;
• Δp is the uncertainty in the momentum measurement;
• ΔE is the uncertainty in the energy measurement;
• Δt is the uncertainty in the time measurement;
• h is a constant from quantum theory known as Planck's constant;
• π is pi.

Putting these symbols together, the two uncertainty relationships look like the following (Gerjuoy, 1993):

Δp Δx ≥ h / (4π)

and

ΔE Δt ≥ h / (4π)

Assume that it is possible to measure the position of a moving electron with such great accuracy that Δx is very small. What happens to the precision of its momentum, Δp, measured at the same instant? From the first relationship, the following formula can be derived:

Δp ≥ h / (4π Δx)

It can be seen that the uncertainty in the momentum measurement, Δp, is very large because Δx in the denominator is very small. In fact, if the precision of the position measurement gets so great that the uncertainty Δx gets so small that it approaches zero, then Δp gets so large that it approaches infinity, or it becomes completely undefined. The uncertainty relationship for energy is stated as giving an estimate for ΔE, the uncertainty in the energy that is found when measuring it in an experiment lasting at most a time Δt. Considering the two equations above, a quite accurate measurement of one variable involves a relatively large uncertainty in the measurement of the other. Quantum theory measurement is needed in the microscopic world because the measurement interaction disturbs the object. The classical theory of measurement is adequate for the macroscopic world because the measurement interaction does not significantly disturb the object. The Uncertainty Principle of Heisenberg is the theoretical framework for the derivation of the knowledge potential measurement framework.

KNOWLEDGE POTENTIAL MEASUREMENT FRAMEWORK

Basic Concept and Assumptions
The Uncertainty Principle of Heisenberg is the basic theoretical framework for the knowledge potential measurement procedure. However, it is not possible to take the equations from Heisenberg and to transfer them to the knowledge potential approach without changes. The Uncertainty Principle functions as a reference model for the knowledge approach, because the latter is not a physical environment. The linking relationship is that uncertainty is the major characteristic of knowledge as well as of the physical phenomenon. The measurement procedure for the knowledge worker is greatly influenced by uncertain decisions. The match between the two ideas is the concept of uncertainty. During the analogical mapping process, moreover, the existing structure of the Uncertainty Principle is imported into the new knowledge measurement approach.
The problem of measurement in the knowledge approach arises from the fact that several principles of the quantum world appear to be in conflict with the knowledge approach and that, in contrast to quantum measurement, the knowledge approach is influenced by the individual abilities of a person. This is the reason why knowledge management has three basic assumptions for the uncertainty measurement:

1. The constant h will not be used; it is replaced by the knowledge potential of a knowledge worker.
2. The measurement procedure is not a physical one. Thus, characteristics of a human resource based approach must be taken into consideration. A basic physical definition has to be re-interpreted and put into a knowledge context by using the analogical reasoning process.
3. The theoretical implication of Heisenberg's measurement is that the more precisely the position is determined, the less precisely the momentum is known in this instant, and vice versa. This physical phenomenon is difficult to explain for the knowledge measurement process; hence, the conducted case studies show first implications of this phenomenon.
It must be pointed out that the procedure of measurement in the case of knowledge is not a traditional one in the sense of a physical measurement. Rather it is a measurement in psychology. Michell (Michell, 1999) points out that there is a difference in measuring in the natural sciences and in a psychological context. In the natural sciences numbers are not assigned to anything; they are measured like they are in physics. The difference is that numbers are assigned to objects in psychological measurement. In the knowledge potential measurement procedure, each knowledge worker is assigned a number, his personal knowledge potential.
Knowledge Potential Measurement Process

Figure 2 illustrates the Knowledge Potential Measurement Model (Fink, 2004). The two key measured properties are:

• Knowledge momentum (person-dependent variables); and
• Knowledge position (system-dependent variables).
Knowledge mass is a quantity representing the amount of expertise, skills, and experience of a knowledge worker. Mass is characterized by an individual dimension, and it is dependent on the personal ability of a knowledge worker to deal with his experience. Mass can be defined as a person-dependent variable. The amount of the mass is influenced by variables which depend on the experience of a knowledge worker. The variables concerning individual knowledge acquisition, transfer, and learning experience are assigned to the mass index. The knowledge mass is the sum of the four dimensions of the knowledge potential view: content, networking, skills, and learning information. Therefore, knowledge mass can be defined as follows:

Knowledge Mass = {Content, Networking, Skills, Learning}

The term knowledge mass is defined as the sum of person-dependent variables that influence the knowledge potential of an expert, and it is measured by four variables: Content, Networking, Personal Skills and Learning Environment. Velocity is an expression for the displacement an object or particle undergoes with respect to time. Time is an observed phenomenon by means of which human beings sense and record changes in the environment and in the universe. Velocity, then, is the time rate of change of position of a body
Figure 2. Knowledge potential measurement model
in a particular direction. So velocity is direction-oriented. When evaluating the velocity of an object, one must monitor the direction. To determine the velocity of an object, one would need to know the speed and direction of the object. In terms of the knowledge potential view, knowledge velocity is the accomplishment of problem solving objectives. How well has a knowledge worker reached his objectives within a certain time? This means that if a person has to solve a problem, the quality of the problem solution and the length of time needed to solve the problem are important facts. The quality of the solution process is relevant, but the time it took to solve the problem is also relevant. Velocity is an expression of the quality of the problem solution process for a knowledge worker in a certain time. It is not enough that a knowledge worker solves a problem quickly; you also need to know the direction of the solution process. Velocity measures the degree to which any knowledge contributes to knowledge potential. An optimal velocity performance is knowledge with a high degree of contribution towards the improvement of the knowledge potential. Tiwana (Tiwana, 2000) uses the term "knowledge velocity" that successful
companies must develop to overcome knowledge sluggishness and to gain competitive advantages. Tiwana notes that overcoming knowledge sluggishness requires learning from failures and their analyses to prevent the repetition of past mistakes. It is necessary to integrate a knowledge velocity into the knowledge processes of a company's business processes. Knowledge velocity should allow people to learn from past decisions and to apply this experience to new complex choices and to future decisions. Davenport and Prusak (1998) point out that a successful and efficient knowledge transfer is influenced by the velocity of the transfer, "the speed with which knowledge moves through an organization. How quickly and widely is it disseminated? How quickly do the people who need the knowledge become aware of it and get access to it?" In a company, there must be a general acceptance for rapid decision-making and the quick application of knowledge to the development of new products and services. Velocity is an important variable for one to understand how efficiently a company is using its knowledge capital. The term "knowledge velocity" in the knowledge potential view can be defined as follows:
Knowledge Velocity = Degree of quality that a knowledge worker uses to solve a problem with respect to the time dimension.

Thus, how rapidly can customers receive high quality problem solutions? Knowledge velocity is the implementation speed for good solutions and the quality of the solution to the problem. Knowledge velocity is directed towards the high quality solution of customer problems. Knowledge velocity is the accomplishment of problem solving objectives. Finally, the term knowledge position has to be described. The questions behind the knowledge position measurement are: In what environment is a person applying his knowledge? Where does the action take place? What factors influence the application of knowledge? The position variable is system-dependent. The knowledge worker cannot directly influence the position dimensions because they are not dependent on his personal behavior.

Knowledge Position = {Culture, Organizational Knowledge, Competitors, Customers, I&CS/KMS}

The term knowledge position covers all system-dependent variables that influence the creation of the knowledge potential of the knowledge worker, and it is influenced by five variables: Culture, Organizational Knowledge, Competitor Knowledge, Customer Knowledge, and Information/Knowledge Management Systems. Knowledge position covers all of the influencing variables for knowledge potential that cannot be manipulated and changed directly by the knowledge worker. The action-knowledge is embedded in outputs such as products or services demanded by a customer. The definition of the two variable groups depends on the company-specific structure and on industry-dependent influencing factors. These properties appear as variables in an equation that describes the knowledge potential of a knowledge
worker. Each knowledge worker answered questions concerning the nine dimensions which cover the different knowledge management aspects. These nine dimensions cover the major influencing knowledge fields. The number of dimensions can also be enlarged to more than nine or reduced to fewer than nine; these specifications depend on the industry and on the specific company setting. Similar to the Jennex and Olfman Knowledge Management Success Model (Jennex & Olfman, 2007; Jennex, Smolnik, & Croasdell, 2008), it was derived from several US and European case studies and expert interviews conducted by the author. These case studies took place between 2004 and 2008 and concentrated on the IT industry sector. In addition, videotaping (Herschel & Yermish, 2008) and analysis are applied to get more insight into the knowledge sharing process. After the calculation of the knowledge potential, the value is clustered into five knowledge-
Figure 3. Knowledge potential process model for SMEs
of thinking. In order to make measurement more acceptable it has to be socialized in teams, units and the overall organizations.
Knowledge Potential Measurement Process Model for SMes Fink (2004) applied the Knowledge Potential Measurement Method to large organizations, primarily in the ERP vendor segment. The experiences of the conducted case studies showed that the process of measurement is cost and time consuming. Since 2006, the measurement method was therefore applied to small and medium sized organizations (SMEs) in the US and in European countries (Fink & Ploder, 2008). Historically, knowledge management focused on the domain of larger organizations. Consequently issues of culture, networking, organizational structure and technological infrastructure have been examined upon the implementation of knowledge management initiatives in large multi-national organizations and seem to give little relevance (Corso, Martini, Paolucci, & Pellegrini, 2003; Delahaye, 2003; Wong, 2005) to small and medium enterprises (SMEs). However, the success and growth of SMEs depends on how well they manage the knowledge of their knowledge work-
ers. Managers in SMEs have to recognize that the uniqueness and creativity of each knowledge worker will lead to customer satisfaction and the success of the SMEs. The view of knowledge management for SMEs is currently discussed by Fink and Ploder (Fink & Ploder, 2006; Fink & Ploder, 2007; Fink & Ploder, 2008), who state that SMEs need a knowledge process model that can be applied in a time-saving and cost-saving manner. Figure 3 illustrates the basic framework for SMEs to integrate knowledge measurement into their knowledge management concept. The SME layer defines the local and national specifications for an SME in order to participate in knowledge management. During this process it is necessary to deal with a clearly focused strategy, good leadership practice and the social context for successful knowledge management implementation. After this process, it is necessary for SMEs to concentrate on the key knowledge processes of their organization (Fink & Ploder, 2008). Rather than complex knowledge management systems, Fink and Ploder (2007) found that only the key knowledge processes of SMEs are relevant for modeling (knowledge process layer). However, after the definition of the key knowledge processes, the knowledge potential measurement process needs to be inte-
101
Process Model for Knowledge Potential Measurement in SMEs
Figure 4. Balanced system for knowledge potential method
grated into the modeling process. Once the knowledge workers have been identified, they their personal knowledge potential value can be measured by applying the knowledge potential method. This means, that the knowledge worker stand in the center of consideration and the knowledge velocity, knowledge position and knowledge momentum are measurement. The result should be a balanced system. Figure 4 illustrates a balanced system for the knowledge potential measurement process indicating that a balance can only be achieved by a positive application of the knowledge velocity. This means, only knowledge workers who are able to apply their know-how in a timely manner can gain competitive advantages for SMEs. The Performance Layer is an indicator system that measures the success of the knowledge process and enables SMEs to improve the knowledge potential of their experts in order to gain competitive advantages.
Limitations and Future Research

The limits of any measured value involve uncertainty; thus, uncertainty should be taken into consideration for any calculated value. Ronen (1988) states that, from a physical viewpoint, all measured values are approximate and any degree of accuracy is subject to the limitations of the Heisenberg Uncertainty Principle, in the sense that uncertainty is inherent in physical processes. Ronen makes the point that the accuracy of physical parameters improves over time, which reduces uncertainties. This basic idea from physics can also be transferred to the knowledge potential view, meaning that the measurement uncertainty can be reduced over time.

In our everyday life, decisions are made based on the calculation of numerical values, and the quality of a decision is highly dependent on the size of the error associated with those values. A single measurement can be the basis for further actions concerning our safety, health, and environment or, in our case, the company's knowledge value. It is therefore important to keep the uncertainties of such measurements small enough that any actions based on the measurement are negligibly affected. The measurement of the nine variables done by the knowledge engineer includes an uncertainty which affects the system of measurement and, hence, the outcome of that system. This means the calculated knowledge potential for each knowledge worker is associated with an uncertainty based on the measurement error of the interviewer. Nevertheless, we tend to assume that we measure correct values. Gupta (1992) argues that every human is confronted with uncertainties arising from our thinking, cognition, and perception processes. In the learning process, every human collects experiences, extracts useful information from the uncertainties in the environment, and uses this information for further actions and decision-making processes.

Current and future research deals with further tailoring the measurement process to SME characteristics. The process of measurement itself has to be simplified in order to overcome knowledge measurement barriers (Fink, 2009). So far, expert interviews have been conducted with each knowledge worker in order to measure the value in the nine dimensions. Currently a software prototype is under development to standardize certain questions and to improve the result process. The key measurement process is still conducted with expert interviews and videotaping.
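As an illustration of how the interviewer's measurement error could carry through to the calculated knowledge potential, the sketch below assumes independent, equal errors on the nine dimension ratings, so that the uncertainty of the averaged value shrinks with the square root of the number of dimensions. This mirrors Ronen's (1988) point that accumulating measurements reduces uncertainty, but the error model itself is an assumption, not part of the method.

```python
import math

# Sketch under simplifying assumptions (not from the chapter): each of the nine
# dimension ratings carries an independent interviewer error of +/- sigma, so the
# uncertainty of the averaged knowledge potential is sigma / sqrt(n).

def potential_with_uncertainty(scores, sigma_per_dimension):
    n = len(scores)
    mean = sum(scores) / n
    uncertainty = sigma_per_dimension / math.sqrt(n)
    return mean, uncertainty

mean, unc = potential_with_uncertainty(
    [70, 55, 80, 65, 60, 75, 50, 85, 90], sigma_per_dimension=5.0
)
print(f"knowledge potential = {mean:.1f} +/- {unc:.2f}")  # 70.0 +/- 1.67
```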
References

Amar, A. (2002). Managing Knowledge Workers. Westport, CT: Quorum Books.
Corso, M., Martini, A., Paolucci, E., & Pellegrini, L. (2003). Knowledge management configurations in Italian small-to-medium enterprises. Integrated Manufacturing Systems, 14(1), 46–56. doi:10.1108/09576060310453344
Davenport, T., & Prusak, L. (1998). Working Knowledge: How Organizations Manage What They Know. Boston: Harvard Business School Press.
Delahaye, D. (2003). Knowledge management in a SME. International Journal of Organisational Behaviour, 9(3), 604–614.
Dreyfus, H., & Dreyfus, S. (1997). Why computers may never think like people. In Ruggles, R. (Ed.), Knowledge Management Tools (pp. 31–50). Boston: Butterworth-Heinemann.
Fink, K. (2009). Knowledge measurement barriers. Paper presented at the 15th Americas Conference on Information Systems, San Francisco.
Fink, K., & Ploder, C. (2006). The impact of knowledge process modeling on small and medium-sized enterprises. In K. Tochtermann & H. Maurer (Eds.), Proceedings of I-KNOW '06: 6th International Conference on Knowledge Management (pp. 47–51). Graz: J.UCS.
Fink, K., & Ploder, C. (2007). Knowledge process modeling in SME and cost-efficient software support: Theoretical framework and empirical studies. In Khosrow-Pour, M. (Ed.), Managing Worldwide Operations and Communications with Information Technology. Hershey, PA: IGI Global.
Fink, K., & Ploder, C. (2008). Integration concept for knowledge processes, methods, and software for SMEs. In Gupta, J., Sharma, S., & Rashid, M. (Eds.), Encyclopedia of Enterprise Systems. Hershey, PA: IGI Global.
Gerjuoy, E. (1993). Uncertainty principle. In Parker, S. (Ed.), Encyclopedia of Physics (pp. 1490–1491). New York: McGraw-Hill.
Green, H. (2000). Information Theory and Quantum Physics. Berlin: Springer.
Gribbin, J. (1999). Q is for Quantum. New York: Touchstone.
Gupta, M. (1992). Intelligence, uncertainty and information. In Ayyub, B., Gupta, M., & Kanal, L. (Eds.), Analysis and Management of Uncertainty. Amsterdam: North-Holland.
Herschel, R., & Yermish, I. (2008). Knowledge transfer: Revisiting video. International Journal of Knowledge Management, 4(2).
Holsapple, C., & Wu, J. (2008). Does knowledge management pay off? Paper presented at the 41st Hawaii International Conference on System Sciences (HICSS-41).
Jennex, M. E. (Ed.). (2007). Knowledge Management in Modern Organizations. Hershey, PA: Idea Group Publishing.
Kavanagh, M., & Thite, M. (2009). Human Resource Information Systems. Los Angeles: Sage.
Kilmann, R. (2001). Quantum Organization. Palo Alto, CA: Davies-Black Publishing.
Michell, J. (1999). Measurement in Psychology. Cambridge, UK: Cambridge University Press. doi:10.1017/CBO9780511490040
Neely, A. (2007). Business Performance Measurement. Cambridge, UK: Cambridge University Press. doi:10.1017/CBO9780511488481
OECD. (2000). Knowledge Management in the Learning Society. Paris: OECD.
Pearl, J. (1990). Bayesian decision methods. In Shafer, G., & Pearl, J. (Eds.), Readings in Uncertain Reasoning (pp. 345–352). San Francisco: Morgan Kaufmann.
Pearson, T. (1990). Measurement and the knowledge revolution. In Cortada, J., & Woods, J. (Eds.), The Knowledge Management Yearbook. Boston: Butterworth-Heinemann.
Ronen, Y. (1988). The role of uncertainties. In Ronen, Y. (Ed.), Uncertainty Analysis (pp. 2–39). Boca Raton, FL: CRC Press.
Simon, H., & Kaplan, C. (1989). Foundations in Cognitive Science. Cambridge, MA: The MIT Press.
Skyrme, D. (1998). Measuring the Value of Knowledge: Metrics for the Knowledge-Based Business. London: Business Intelligence.
Spitzer, D. (2007). Transforming Performance Measurement. New York: American Management Association.
Stewart, T. (1997). Intellectual Capital: The New Wealth of Organizations. New York: Doubleday Currency.
Sveiby, K. (1997). The New Organizational Wealth: Managing and Measuring Knowledge-Based Assets. San Francisco: Berrett-Koehler.
Tiwana, A. (2000). The Knowledge Management Toolkit: Practical Techniques for Building a Knowledge Management System. Upper Saddle River, NJ: Prentice Hall.
Wick, D. (1995). The Infamous Boundary: Seven Decades of Controversy in Quantum Physics. Boston: Birkhäuser.
Wilson, R., & Keil, F. (1999). The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: The MIT Press.
Wong, K. (2005). Critical success factors for implementing knowledge management in small and medium enterprises. Industrial Management & Data Systems, 105(3), 261–279. doi:10.1108/02635570510590101
Zohar, D. (1997). Rewiring the Corporate Brain: Using the New Science to Rethink How We Structure and Lead Organizations. San Francisco: Berrett-Koehler.
Key Terms and Definitions

Knowledge Mass: The sum of person-dependent variables that influence the knowledge potential of an expert; it can be measured by variables such as content, networking, personal skills, and learning environment.

Knowledge Position: All system-dependent variables that influence the creation of the knowledge potential of the knowledge worker; it is influenced by variables such as culture, organizational knowledge, competitor knowledge, customer knowledge, and information/knowledge management systems.

Knowledge Potential Measurement: An expert-oriented measurement process that puts the employee at the center of consideration. The main focus lies in the ability to capture the variety of influencing factors in a working environment.

Knowledge Velocity: The degree of quality with which a knowledge worker solves a problem with respect to the time dimension: how rapidly can customers receive high-quality problem solutions? Knowledge velocity is the implementation speed for good solutions together with the quality of the solution; it is directed towards the high-quality solution of customer problems and the accomplishment of problem-solving objectives.
Knowledge Worker: A person who has the ability to solve complex problems by using the experience and skills gained in a long learning process. The key players in an organization are the experts with their skills and experience.

Quantum Organization: A new paradigm for managing organizational transformation in a world which is highly interconnected and where success depends on whether the participants progress towards self-aware consciousness. This means that the process of transformation in an organization requires that individuals develop a self-aware consciousness.
Small and Medium-Sized Enterprises (SME): Enterprises which employ fewer than 250 persons and which have an annual turnover not exceeding EUR 50 million and/or an annual balance sheet total not exceeding EUR 43 million. Within the SME category, a small enterprise is defined as an enterprise which employs fewer than 50 persons and whose annual turnover and/or annual balance sheet total does not exceed EUR 10 million. A micro enterprise is defined as an enterprise which employs fewer than 10 persons and whose annual turnover and/or annual balance sheet total does not exceed EUR 2 million.
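The headcount, turnover, and balance-sheet ceilings quoted in this definition can be restated as a small classification rule. The sketch below is illustrative only, and the reading of "and/or" as "headcount limit plus at least one financial ceiling" is an assumption.

```python
# Minimal sketch of the size-class thresholds quoted in the key terms (headcount,
# annual turnover, and balance-sheet ceilings). Interpretation of "and/or":
# headcount limit must hold, plus at least one of the two financial ceilings.

def sme_class(headcount, turnover_meur, balance_sheet_meur):
    def fits(max_staff, max_turnover, max_balance):
        return headcount < max_staff and (
            turnover_meur <= max_turnover or balance_sheet_meur <= max_balance
        )

    if fits(10, 2, 2):
        return "micro enterprise"
    if fits(50, 10, 10):
        return "small enterprise"
    if fits(250, 50, 43):
        return "medium-sized enterprise"
    return "large enterprise (outside the SME definition)"

print(sme_class(headcount=35, turnover_meur=8, balance_sheet_meur=12))  # small enterprise
```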
Chapter 7
Developing Individual Level Outcome Measures in the Context of Knowledge Management Success1 Shahnawaz Muhammed American University of Middle East, Kuwait William J. Doll The University of Toledo, USA Xiaodong Deng Oakland University, USA
Abstract

Success of organizational level knowledge management initiatives depends on how effectively individuals implementing these initiatives use their knowledge to bring about outcomes that add value in their work. To facilitate assessment of individual level outcomes in the knowledge management context, this research provides a model of interrelationships among individual level knowledge management success measures, which include conceptual knowledge, contextual knowledge, operational knowledge, innovation, and performance. The model was tested using structural equation modeling based on data collected from managerial and professional knowledge workers. The results suggest that conceptual knowledge enhances operational and contextual knowledge. Contextual knowledge improves operational knowledge and is also a key predictor of innovations. The innovativeness of an individual's work along with operational knowledge enhances work performance. The results support the proposed model. This model can potentially be used for measuring knowledge management success at the individual level.

DOI: 10.4018/978-1-60566-709-6.ch007
Introduction

At an organizational level, an important aspect of knowledge management (KM) success is to have systems and processes that enable getting the right information to the right person at the right time (Jennex, Smolnik & Croasdell, 2007). How these systems and processes impact an individual's knowledge and the subsequent work outcomes is an equally important aspect, if not of greater significance. One of the most important objectives of various organizational systems and processes is to empower individuals to take informed actions that will create value for the organization. A pragmatic view of individual knowledge and how it is related to other performance outcomes is lacking in the literature. This research addresses this gap and empirically tests a model of individual task knowledge and its relationship with other individual level outcomes in the context of KM success.

A significant amount of the KM success literature focuses mainly on systems and processes, often with little emphasis on the individuals who use these systems to solve problems and create value (Guo & Sheffield, 2006). Knowledge is often viewed as an organizational resource that has to be managed well in order to gain competitive advantage. Such an organizational view of knowledge is comparable to the resource-based view of the firm (Grant, 1996; Grover & Davenport, 2001). From the organizational view of knowledge, specific processes and systems, including information systems (IS), are used to manage this organizational resource. These processes and systems often form the key elements of most organizational level KM initiatives. In the KM literature, knowledge is seldom studied as an individual resource that improves the individual's productivity and innovation, even though individuals' productivity and innovation can contribute to organizational success. Several researchers acknowledge the importance of individual knowledge in the implementation and success of organizational level KM
initiatives (Grant, 1996; Grover & Davenport, 2001). The task-related knowledge of individuals can be considered as a critical component of how individuals act to create value for organizations. This task knowledge reflects the individuals’ knowledge related to their work. The task knowledge accumulates over time and may include the learning that takes place within the organizational context (Kim, 1993; Nonaka & Takeuchi, 1995). The lack of a broader understanding of KM and its outcomes at the individual level can potentially hamper the overall research efforts in this field (Guo & Sheffield, 2006). Acknowledging that there are different types of task knowledge, we contend that enhanced task knowledge should be one of the primary outcomes of individual knowledge management. We further explore the various dimensions of task knowledge, their interrelationships, and relationships with other relevant individual outcomes in an organizational context such as individual performance and innovation. Within a broader context of KM success at the organizational level, we focus on the individual level to examine the task knowledge and performance outcomes and their relationships. Organizational and individual factors that contribute to the individual KM outcomes are discussed. Specifically, we focus on (1) developing measures of task knowledge, which includes conceptual knowledge, contextual knowledge, and operational knowledge, (2) exploring the relationships among the three types of knowledge, and (3) relating the various dimensions of individual task knowledge to the innovation and performance of individual knowledge workers.
Knowledge Management Success

KM success is viewed from many different perspectives in the KM literature. From an IS perspective, KM success is often equated with knowledge management system (KMS) success.
Figure 1. A model of knowledge management success at the individual level
Those who adopt this perspective have often used the DeLone and McLean (1992, 2003) IS success model to model KMS success (Jennex & Olfman, 2006; Kulkarni, Ravindran & Freeze, 2006; Wu & Wang, 2006). Ambiguity exists among researchers in equating KMS success with KM success (Jennex et al., 2007). Jennex et al. (2007) detail the concept of KM success from many widely adopted perspectives in the KM literature and provide an integrated definition of KM success as "...capturing the right knowledge, getting the right knowledge to the right user, and using this knowledge to improve organizational and/or individual performance. KM success is measured using the dimensions of impact on business process, strategy, leadership, efficiency and effectiveness of KM processes, efficiency and effectiveness of the KM system, organizational culture, and knowledge content." (p. 6). This is a much broader definition and captures many of the essential elements that embody such an important concept in KM. Given the multi-faceted nature of KM success as defined above, it might be a challenge to capture the entire concept in one instrument development effort. Further, various aspects highlighted in this definition might be more applicable in different contexts. In another study that summarized explicit and implied KM success measures in the literature, Anantatmula and Kanungo (2006) identified a host of KM outcomes ranging from
better employee skills to increased share price. They classified these outcomes into five categories consisting of employee performance, organizational performance, business performance, market performance, and intellectual capital. In this study, different respondents rated different outcomes as most important for them. This suggests that the appropriateness of using a specific outcome measure may be context dependent. A key aspect of Jennex et al.'s (2007) definition is that the knowledge captured, distributed, and used should improve organizational and/or individual performance depending on the context in which it is applied. The definition implies that there could be aspects at both the organizational and individual level that contribute to KM success. In this chapter, we focus on the definition and measurement of the individual level performance outcome in the context of KM success as defined here. In Figure 1 below, we show how individual level KM success relates to organizational level factors. At an organizational level, KM success and the factors that contribute to this success are often used interchangeably in the literature (Jennex et al., 2007). Jennex and Olfman (2005) have shown a need to separate the critical success factors (CSF) from the outcomes of KM success at an organizational level. From their perspective, success factors are aspects of the organization or the
environment that are required for KM to succeed and should be viewed as distinct from the outcomes. They identify twelve such success factors, including knowledge strategy, technical infrastructure, and organizational culture/structure, which are crucial to the success of any KM implementation. The organizational level CSFs affect individual behaviors by creating conducive or adverse work environments. From a KM perspective, these factors enable or deter behaviors related to how individuals manage their knowledge. In a work setting, knowledge management behaviors of an individual can include processes involved in knowledge creation, knowledge sharing, knowledge access, knowledge capture, and use of that knowledge (knowledge application). By engaging in these sustained behaviors or practices, individuals should be able to improve their work related knowledge and reap the associated benefits. These individual level benefits, such as productivity gains and innovations, form the basis of organizational level outcomes or impacts (DeLone & McLean, 1992). This is especially pronounced in the context of knowledge work, where individual work is primarily interfaced with information systems and KM success is often equated to KMS success. For the vast majority of the respondents, Jennex et al. (2007) found that KM success was viewed as a multidimensional construct including both the process and the outcome. This was also identified by Anantatmula and Kanungo (2006) as KM success factors and KM outcomes, where "KM success factors can be viewed as facilitating factors for a KM initiative" (p. 27). In developing an outcome measure, it is essential that the process that generates the outcome be delineated as clearly as possible. This enables us to examine the process and, if possible, prescribe interventions to positively affect the outcomes. Accordingly, in this chapter we focus on the individual level KM success outcomes and examine the interrelationships between them, while showing how they relate to the individual level KM processes (Figure 1).
In the definition of KM success proposed by Jennex et al. (2007), knowledge content is an important aspect. For a successful KM process or initiative, it is imperative that such initiatives enhance knowledge content at both individual and organizational levels. This implies that KM success necessitates that individuals in the organization become more knowledgeable and transform that knowledge into performance gains that are of value to their organizations. To capture individuals' task-related knowledge in an organizational context, we conceptualized task knowledge as having conceptual, contextual, and operational factors based on Yoshioka, Herman, Yates and Orlikowski's (2001) knowledge framework for communicative actions. Conceptual knowledge (know-why) is an individual's understanding of why specific actions need to be taken to complete the task (Kim, 1993; Schultze & Leidner, 2002). Contextual knowledge is an individual's understanding of the contextual factors surrounding the task at hand, such as the knowledge related to the people (know-who), locations (know-where), and timing (know-when) (Earl, 2001; Pomerol, Brezillon & Pasquier, 2002). Operational knowledge is an individual's understanding of task requirements (know-what) and the processes of how to accomplish the task (know-how) (Dhaliwal & Benbasat, 1996; Pfeffer & Sutton, 1999). If individuals have the right knowledge at the right time, appropriate value-added and creative actions can be enacted. Although possessing knowledge is desirable, individuals should also be able to use the knowledge to make their work more innovative and productive, ultimately adding value to the organization. Thus, in addition to the task knowledge, we focus on innovation and performance. Innovation is the extent to which an individual's work is novel and creative. Performance is how well an individual's work is done in terms of efficiency, effectiveness, and quality of work. Figure 1 clarifies the focus of this chapter by showing the individual knowledge management
outcomes or success measures in the context of the larger process which shapes these outcomes. We suggest that the organizational contextual factors such as the knowledge management initiatives and systems influence individual level knowledge management practices. These practices lead to enhanced task knowledge and should improve an individual’s innovation and performance. The focus of this chapter is on the KM outcomes of the individual as indicated in the shaded area in Figure 1. These individual knowledge management outcomes may serve as indicators of individual level KM success. In the next section we explore the various aspects of these individual level KM outcomes and their interrelationships.
Individual Level Outcomes for KM Success

As discussed in the earlier section, knowledge content is an important aspect of KM success (Jennex et al., 2007) that we focus on in this chapter. Knowledge itself is a complex construct that is subject to a wide range of interpretations. Knowledge could be viewed from an organizational perspective or from an individual perspective. Despite a substantial interest in organizational level knowledge and its link to organizational success, there is a widespread recognition that individuals are the essential unit of the knowledgeable entity. Even for knowledge to be considered at the organizational level, knowledgeable individuals within the organization who collect, codify, and share their knowledge need to be considered (Grant, 1996; Davenport & Prusak, 1998). There are several aspects of knowledge such as content, volume, detail, form, value and tacitness/explicitness (Chilton & Bloodgood, 2008). Certain aspects may be more appropriate in certain situations. At other times, a multi-dimensional measure may be more appropriate. KM success at the individual level implies that the individual's knowledge is enhanced in aspects which generate
value to the organization. It may include being better skilled at doing the job or enhancing the intellectual capital (Anantatmula & Kanungo, 2006) or task knowledge of an individual. KM success also implies that individuals translate their knowledge into productive outcomes which the organization values. Even though organizations often measure their employees' knowledge in the context of their work, there are few studies that have tried to measure knowledge in any of its aspects for theory building and testing (with a notable exception of Chilton & Bloodgood, 2008). In this chapter we focus on individual knowledge as content in the context of work and call this task knowledge. Further, the relationships between task knowledge and other productive outcomes, such as the innovativeness of an individual's work and his or her performance, are also explored.
Dimensions of Task Knowledge

Traditionally, task knowledge is measured based on skill tests or tests that are specific to each kind of job. This approach might be appropriate in certain situations but is limited as a broad measure applicable across a wide range of tasks. It is similar to the tests that students take at the end of a particular course to assess their learning during a given period of time. Such assessment is of limited usefulness for research that is designed to test substantive relationships among broad measures for building or testing theory. Further, the assessment itself is limited to the knowledge contained in such tests, and the knowledge base largely needs to be defined a priori. Such an a priori and narrow definition of one's knowledge base may not be realistically achieved in a constantly changing environment on an ongoing basis (Cohen, 1998), and may also be dependent on the situation to which it is being applied (Chilton & Bloodgood, 2008).

Here we define task knowledge as what an individual knows in relation to a particular task at a specific point in time, equating it to what one's mind holds as his/her mental model (Kim, 1993). Building upon the 5W1H paradigm of communicative questions why, who, when, where, what, and how (Yoshioka et al., 2001), we conceptualize knowledge pertaining to a task to be traceable to these questions. These questions probe the conceptual, contextual, and operational knowledge involved in a task (see Figure 2).

Figure 2. Knowledge management outcomes, definitions and relevant literature

Conceptual knowledge pertaining to a task is the deeper understanding of why the person is engaged in that task and why it has to be done the way it is expected to be performed by that individual. This type of knowledge is often referred to as know-why (Agarwal, Krudys & Tanniru, 1997; Garud, 1997; Schultze & Leidner, 2002). Wiig and Jooste (2004) point to the importance
of such conceptual knowledge when they refer to metaknowledge in their classification of task knowledge. According to Kim (1993), know-why implies the ability to articulate a conceptual understanding of an experience. It can be viewed as the basic framework that helps individuals build or connect other information as they create mental models of how the world operates. Know-why is also sometimes referred to more broadly as the understanding of principles and laws of nature in the human mind and in society (Johnson, Lorenz & Lundvall, 2002). Contextual knowledge in relation to a task is knowledge that may not necessarily be central to the satisfactory execution of that task, but may be peripherally related to it. It may be considered
as the background knowledge with respect to a particular task (Pomerol et al., 2002). Often this knowledge is centered on (1) knowledge regarding the people that may be involved in or affected by that task (know-who: for example, knowledge regarding the customers and stakeholders) or information such as who knows what and who knows what to do (Johnson et al., 2002; Rulke & Galaskiewicz, 2000), (2) knowledge regarding the location of the task or the location of information about the task (know-where: for example, where can I get appropriate resources to accomplish the task), and (3) knowledge regarding the temporal aspects of the task (know-when: for example, when should each aspect of the job be done). Such knowledge helps embellish and enrich the operationalization of an act in addition to providing a broader knowledge base for innovative ideas (Earl, 2001).

Operational knowledge is the core knowledge that is needed to accomplish a task satisfactorily. It is also sometimes referred to as problem-solving knowledge or domain knowledge (Dhaliwal & Benbasat, 1996). This core minimum knowledge regarding the task involves know-what and know-how, which are sometimes referred to as declarative and procedural knowledge (Garud, 1997; Schultze & Leidner, 2002). Know-what is the knowledge regarding what needs to be done to accomplish a task successfully (Johnson et al., 2002). Know-how is the knowledge regarding how that task needs to be performed (Johnson et al., 2002). Without at least a cursory idea of this operational knowledge it is unlikely that the individual will be able to complete his/her tasks satisfactorily (Kogut & Zander, 1992; Nonaka & Takeuchi, 1995; Pfeffer & Sutton, 1999).
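The 5W1H grouping described above can be summarized compactly; the sketch below is only an organizational aid for the reader, since the chapter measures these dimensions with survey items rather than code.

```python
# Compact restatement of the chapter's 5W1H mapping: six communicative questions
# grouped into the three task-knowledge dimensions.

TASK_KNOWLEDGE_DIMENSIONS = {
    "conceptual": ["know-why"],                             # why the task is done the way it is
    "contextual": ["know-who", "know-where", "know-when"],  # people, locations, timing around the task
    "operational": ["know-what", "know-how"],               # task requirements and how to accomplish them
}

def dimension_of(question):
    """Return the task-knowledge dimension a 5W1H question probes."""
    for dimension, questions in TASK_KNOWLEDGE_DIMENSIONS.items():
        if question in questions:
            return dimension
    raise ValueError(f"unknown question: {question}")

print(dimension_of("know-where"))  # contextual
```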
Productivity Benefits

This study examines a model of KM success at the individual level, and hence individuals are the unit of analysis. In our model of KM success, in addition to the task knowledge we consider individual
performance and innovativeness of an individual's work. Performance and innovativeness are well-accepted measures that are used in the literature to assess knowledge worker success. Organizations also value employees who are knowledgeable in their work. They value them not necessarily by virtue of their knowledge alone but because they expect such individuals to be productive in what they do. However, the value of knowledge gained is often difficult to measure directly (Cohen, 1998). Therefore, existing studies have used more indirect measures when assessing the impact of knowledge-related activities (Janz & Prasarnphanich, 2003, 2005). Our measures of task knowledge would be a contribution to the literature in this respect and would have greater validity as a component of individual level KM success if they enhance the more traditional measures of KM success such as performance and innovation.

Innovation is one of the important individual activities through which organizations create value (Scott & Bruce, 1994; Van De Ven, 1986). Scott and Bruce (1994) find that creativity and innovation are often used interchangeably. They argue that innovation is not only the creation of new ideas but also the use of such ideas to create new work products or processes. Creativity is a central aspect of all innovations. Creativity is often defined as the production of ideas, products and procedures that are novel and useful to the organization (Amabile, 1996; Madjar, Oldham & Pratt, 2002), as opposed to creative behavioral traits of the individual. Accordingly, the focus here is on the novelty of the external artifact rather than the internal behavioral trait. It may involve recombination of existing ideas, materials, and processes or introducing new ideas, materials, and processes (Madjar et al., 2002).

Having the right knowledge should enhance quality and reduce the variability of task performance (March, 1991). For example, in a new product development context, existing knowledge of the firm, conceptualized as organizational memory, is found to affect information
acquisition efficiency, which in turn contributes to new product performance (Brockman & Morgan, 2003). In a study of IS professionals in a knowledge management context, Janz and Prasarnphanich (2003) used team performance along the three dimensions of efficiency, effectiveness, and timeliness. This was based on outcome measures primarily used in job characteristics and learning studies, and can be applicable at both individual and team levels (Edmondson, 1999; Hackman & Oldham, 1980). Here we adapt Janz and Prasarnphanich's measure of team performance and operationalize individual knowledge worker performance as comprising efficiency, effectiveness, and quality of work.

Figure 3. Relationships among individual knowledge management success outcomes
Relationship among KM Success Outcomes

Figure 3 shows the relationships among the individual level KM success outcomes considered in this chapter and the subsequent research model, which include task knowledge and individual productivity benefits. The types of task knowledge include conceptual, contextual, and operational knowledge. Productivity benefits include innovativeness of an individual's work and performance outcomes. Subsequent
discussions explore these relationships and propose the associated hypotheses. Not all three types of task knowledge may have a direct impact on performance and innovation. Clearly, conceptual knowledge provides the basic mental framework that is needed to build and acquire other types of knowledge that may be needed to accomplish various tasks. In describing individual learning, Kim (1993) describes learning leading to conceptual knowledge as that which challenges the "very nature or existence of prevailing conditions, procedures, or conceptions and leading to new frameworks in the mental model" (p. 40). Conceptual knowledge orients individuals to the world in which they interact and thus helps to integrate other types of knowledge into their mental models. We therefore model conceptual knowledge as that which primarily impacts the other forms of knowledge. In a process model, like Kim's (1993) OADI model, operational and conceptual learning contribute to each other. It is possible to envision the three types of task knowledge as contributing to each other. In a causal model, however, it is more plausible to view conceptual knowledge as a more primal form of knowledge that drives operational and contextual knowledge. Our model of individual knowledge management success outcomes (see Figure 3 below) uses a causal model perspective.
It hypothesizes only the most likely and theoretically prominent linkages from task knowledge constructs to productivity benefits. Conceptual knowledge, which is a deeper and broader understanding of why a task is performed, may not always be necessary to accomplish many aspects of a knowledge worker's job satisfactorily. But having such knowledge provides a sense of purpose and motivation for performing the task in the best possible manner by enhancing know-what and know-how (Agarwal et al., 1997). This broader understanding also helps the individual contextualize his or her actions in the larger scheme of things, and helps draw on appropriate and useful information in novel and useful ways (Kim, 1993). Conceptual knowledge helps the individual look at his/her actions from higher levels of abstraction. Being able to conceptualize the task from a higher level of abstraction means being able to make richer connections with other knowledge that may or may not be immediately necessary for the execution of the task at hand, hence enabling the creation of a richer context for the execution of that task (Gasson, 2005; Johnson et al., 2002). Thus we contend:

H1: The higher the conceptual knowledge of an individual, the higher the operational knowledge of the individual.

H2: The higher the conceptual knowledge of an individual, the higher the contextual knowledge of the individual.

Know-who, know-where, and know-when knowledge creates a rich background for individual actions to take place. Even in situations where the task is primarily centered on this type of knowledge, there still exists a potential to draw upon more of such background information. Such knowledge helps in contextualizing and enriching the primary information that individuals need to use in any of their organizational actions (Johnson et al., 2002). The greater the contextual knowledge
individuals can bring to bear, the better they can embellish their direct task-related knowledge. Especially in today's knowledge intensive environment there is an increasing need to combine knowledge from multiple domains (Gasson, 2005). Thus, we hypothesize:

H3: The higher the contextual knowledge of an individual, the higher the operational knowledge of the individual.

A key aspect of being innovative in the workplace is the ability to generate and apply creative and useful ideas in one's work. Creative artifacts originate from ideas in an individual's mind. Novelty is the hallmark of a creative production and requires that individuals connect disparate knowledge in novel ways in their minds. Wiig and Jooste (2004) indicate that having broader task knowledge should provide more innovative work outcomes. Rich contextual knowledge provides the potential for individuals to draw upon seemingly unimportant data and connect it in novel ways to the task at hand. Knowing who the stakeholders are and understanding their needs and expectations can positively contribute to making the outcomes of a knowledge worker's actions useful, novel, and interesting. Being able to easily access knowledge about where to get appropriate resources and information regarding a particular task, and knowing when to use such information and take appropriate actions, can help individuals ease the task of performing those actions. This frees their mental prowess for more creative work. Further, it is often essential to make use of disparate, multi-domain contextual information to produce hybrid and novel solutions (Engestrom, Engestrom & Karkkainen, 1995). Thus, we contend:

H4: The higher the contextual knowledge of an individual, the higher the innovativeness of the individual's work.
Operational knowledge is the primary knowledge that an individual needs in performing the task. This knowledge includes knowing what needs to be done to accomplish a task and how to do it. When an individual has such information readily accessible in his or her mind, performing the task becomes substantially less effortful. Wiig and Jooste (2004) contend that having such knowledge and understanding provides workers with the basic ability to be efficient. When more such information is available, the implementation of such actions becomes more effective and efficient. In this case, the individual can focus on performing the task with efficiency, effectiveness, and quality. Thus, we hypothesize:

H5: The higher the operational knowledge of an individual, the higher the individual's work performance.

Organizational productivity gains are achieved by making people work more efficiently through many work improvements, including better innovations (Wiig & Jooste, 2004). Especially in non-routine work such as knowledge work, the innovations of individuals create procedures and artifacts that help them accomplish tasks faster and more effectively (Scott & Bruce, 1994; Van De Ven, 1986). Over time, even small innovations in work can accumulate to produce significant performance improvement for the individual and the firm. Thus, we hypothesize:

H6: The higher the innovativeness of an individual's work, the higher the individual's work performance.
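The six hypothesized paths can be written down as a small directed graph. The sketch below is purely illustrative; it traces every hypothesized route from conceptual knowledge to performance and makes explicit that the model links conceptual knowledge to performance only indirectly.

```python
# H1-H6 encoded as a directed graph, plus a helper that lists every hypothesized
# route from a knowledge construct to performance.

PATHS = {
    "conceptual": ["operational", "contextual"],   # H1, H2
    "contextual": ["operational", "innovation"],   # H3, H4
    "operational": ["performance"],                # H5
    "innovation": ["performance"],                 # H6
}

def routes_to_performance(start, trail=None):
    trail = (trail or []) + [start]
    if start == "performance":
        return [trail]
    found = []
    for nxt in PATHS.get(start, []):
        found.extend(routes_to_performance(nxt, trail))
    return found

for route in routes_to_performance("conceptual"):
    print(" -> ".join(route))
# conceptual -> operational -> performance
# conceptual -> contextual -> operational -> performance
# conceptual -> contextual -> innovation -> performance
```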
Research Methods

A cross-sectional survey design was used to collect the data to test our model. In developing and refining the new measures, a pretest of the measurement items was conducted, followed by a pilot test. The pilot test involved a small scale data
collection and assessment of validity, dimensionality, and reliability of the scales. Subsequently, a large scale data collection targeting managerial and professional knowledge workers was implemented using a web-based questionnaire. The following sections briefly describe the pilot, the large scale sample, and the measurement development. The structural equation modeling software package LISREL is employed for measurement assessment and for testing the structural model and hypotheses.
Pilot Testing

A pilot test was performed based on 53 responses obtained out of 68 survey requests to knowledge workers in the United States. Twenty-four responses were received from individuals working in various functions within organizations involved in design, manufacturing, or consulting, and the remaining 29 responses were received from MBA students working for various manufacturing firms. The respondents were identified by their managers or themselves as knowledge workers who used information technology heavily in their daily work. The pilot stage data analysis involved item purification using corrected item-total correlation (CITC) scores, evaluation of unidimensionality using principal component factor analysis, evaluation of convergent and discriminant validity using structural equation modeling, and reliability assessment using Cronbach's alpha. Items pertaining to each construct were modified or eliminated based on the feedback from the pilot results.
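For readers unfamiliar with the two item-level statistics mentioned here, the following sketch shows how Cronbach's alpha and corrected item-total correlations are typically computed. The item-response matrix is hypothetical, and the code is not the authors' analysis script.

```python
import numpy as np

# rows = respondents, columns = items belonging to one scale (hypothetical data)

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def corrected_item_total_correlations(items):
    items = np.asarray(items, dtype=float)
    citc = []
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)   # total score without item j
        citc.append(np.corrcoef(items[:, j], rest)[0, 1])
    return citc

responses = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]])
print(cronbach_alpha(responses), corrected_item_total_correlations(responses))
```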
Large Scale Sample

The large scale data collection was implemented using a web-based survey. Compared to a traditional mail survey, a web-based survey is faster, involves no data transcription errors, and is less costly. An email list from Manufacturer's News Inc targeting individuals in engineering, management, or information technology functions
within U.S. manufacturing industries (represented by NAICS codes starting with 31, 32, and 33) was used. Individuals in these areas are generally considered knowledge workers and represented our target population. This Manufacturer's News Inc list includes a wide variety of knowledge workers within manufacturing firms. We used this email list because of the versatility it offered in selecting the target respondents. The data was collected as part of the requirements for a Ph.D. degree in manufacturing management, and this list was used because the data set had to be of broad interest and relevance to manufacturing executives. Email lists from such vendors can have a large proportion of emails that are not current or individual specific and may not effectively reach the intended respondents, in spite of the vendors' claims of various efforts to ensure quality of the list. In order to mitigate this problem to some extent, we implemented tracking of click-throughs on the site hosting the survey based on the email invitations sent out. Click-throughs represent all the individuals who clicked on the survey link in the email, including those who did not attempt the survey. This better represents the actual number of individuals the email request reached, since such unsolicited survey emails have a tendency to be automatically junked or blocked by organizational firewalls. Because of the traditionally low responses that are typical of such open email lists, 24,279 emails were sent out from this list, of which 9,386 were returned undeliverable due to non-existent emails or server errors. After administering two waves of emails, 252 usable and complete responses were obtained (140 in the first wave and 112 in the second wave), yielding a 31.6% response rate based on the 797 click-throughs, and 1.7% based on the number of emails sent without counting the undeliverables. Respondents include individuals from a wide range of industries and firm sizes. The majority of the respondents held professional, middle management, or executive positions. Non-response bias was
evaluated using a Chi-square test of goodness-offit of various demographic variables between the first and second wave of data collection (Smith, 1983). Results indicated no significant difference (p-value > 0.10) between the various demographic variables. Measures were then evaluated in steps similar to the pilot stage involving item purification, evaluation of factor structure, unidimensionality, and convergent and discriminant validity.
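The reported response rates follow directly from the counts given above, as the short calculation below restates.

```python
# Response-rate arithmetic for the figures reported in the text.
emails_sent = 24_279
undeliverable = 9_386
click_throughs = 797
usable_responses = 140 + 112   # two waves, 252 in total

rate_on_clicks = usable_responses / click_throughs                    # -> 31.6%
rate_on_delivered = usable_responses / (emails_sent - undeliverable)  # -> 1.7%
print(f"{rate_on_clicks:.1%}, {rate_on_delivered:.1%}")               # 31.6%, 1.7%
```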
Measures

Respondents were asked to answer the survey items based on a particular project or assignment, or based on their work during the last six months if they did not typically work on a specific project. Providing a more specific frame of reference as mentioned above was expected to help respondents recall the work situation and answer the questions consistently. It is important to provide such a consistent framework to elicit the level of respondents' knowledge within the specified duration, because conceptual and contextual task knowledge at any given time may be the result of knowledge accumulated over a long period of time, whereas operational knowledge is often acquired closer to when the task needs to be performed. The specific measures for the three dimensions of task knowledge use a five-point Likert-type scale where 1 = "None or to a very little extent" and 5 = "To a very great extent" (see Appendix A). Innovation was measured using three items (see Appendix A) based on Oldham and Cummings' (1996) creative performance and Scott and Bruce's (1994) innovative behavior, with a focus on the work outcome. The performance measure was adapted from Janz and Prasarnphanich's (2003) measure used in a team performance context; here it was adapted to focus on the individual as the unit of analysis. For innovation, a seven-point Likert-type scale ranging
from 1= “Strongly disagree” to 7= “Strongly agree” was used for performance. The final items for each construct after purification and measurement analysis are listed in Appendix A.
Results

First, the data was examined using exploratory factor analysis to validate that the five factors had a simple structure. Next, the data was analyzed using LISREL in a two-step process (Anderson & Gerbing, 1988) in which (a) the measurement model is evaluated and then (b) the structural model is evaluated. In step (a), descriptive statistics are presented along with the analysis of reliability, convergent validity, and discriminant validity of the measures. In step (b), the structural model is evaluated to test the substantive hypotheses H1 through H6. Common method bias, introduced due to the measurement of both independent and dependent variables using a single source, is an important issue in similar studies. Several procedural and statistical approaches are available to minimize or even eliminate such bias (Podsakoff et al., 2003). We used procedural approaches such as providing anonymity in responding to the survey, improving the scale items by piloting them before the large scale study, and counterbalancing by randomizing the order of dependent and independent variables. The statistical remedy suggested by Podsakoff et al. (2003), Harman's single factor test, also suggested that common method bias was not a significant issue.
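Harman's single factor test is commonly operationalized by factoring all items without rotation and checking how much variance the first factor absorbs. The sketch below uses the first principal component of the correlation matrix as a simple stand-in for that first unrotated factor, which is an assumption of this example rather than a description of the authors' exact procedure.

```python
import numpy as np

# First-factor variance share as a rough common-method-bias check.
def first_factor_share(item_matrix):
    corr = np.corrcoef(np.asarray(item_matrix, dtype=float), rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)[::-1]         # sorted descending
    return eigenvalues[0] / eigenvalues.sum()

# A share well above ~0.5 is usually read as a warning sign of common method bias.
rng = np.random.default_rng(0)
items = rng.normal(size=(250, 18))                       # hypothetical 18-item data set
print(f"first factor explains {first_factor_share(items):.1%} of the variance")
```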
Factor Analysis

All the items were factor analyzed with five factors specified, oblimin rotation, and maximum likelihood extraction. The five factor solution yielded a simple structure with all the items loading on their respective factors. Figure 4 below reports the results of the factor analysis with the factors sorted
in descending order with the strongest factor listed first. No cross-loadings were above 0.30. The two lowest factor loadings were for items OPER3 (-0.517) and CONT5 (0.576). All the other items loaded on their respective constructs with loadings above 0.60.
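An analysis of this kind (five factors, oblimin rotation, maximum likelihood extraction) can be reproduced in outline as follows. The factor_analyzer package and the item_responses.csv file are assumptions made for illustration; the chapter does not state which software produced Figure 4.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Assumes the item responses are available in a CSV with one column per survey item.
items = pd.read_csv("item_responses.csv")                # hypothetical data file

efa = FactorAnalyzer(n_factors=5, rotation="oblimin", method="ml")
efa.fit(items)

loadings = pd.DataFrame(efa.loadings_, index=items.columns)
print(loadings.round(2))                                 # pattern matrix, cf. Figure 4
# An item with more than one loading above 0.30 would indicate a cross-loading.
print((loadings.abs() > 0.30).sum(axis=1))
```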
Figure 4. Exploratory factor analysis: Pattern matrix with maximum likelihood extraction and oblimin rotation

Measurement Model Results

The descriptive statistics, Cronbach's alpha, average variance extracted (AVE), and correlations between the variables are reported in Figure 5 below. The means ranged from 3.99 for contextual knowledge (on a 5-point scale) to 5.85 for performance (on a 7-point scale). The standard deviations ranged from 0.66 for contextual knowledge to 1.24 for innovation. The skewness values were between -2 and +2 and the kurtosis values were all lower than 5.0, suggesting that the scales do not violate the assumption of normality. Cronbach's alpha indicated adequate reliability and ranged from 0.81 for operational knowledge to 0.94 for conceptual knowledge. Correlations ranged from 0.14 to 0.59 and were all significant at p-value < 0.01, except for the correlation between operational knowledge and innovation (0.14), which was significant at p-value < 0.05.

Figure 5. Reliability, convergent validity and discriminant validity of task knowledge and performance measures

AVE scores ranged from 0.53 for contextual knowledge to 0.80 for conceptual knowledge. Scores above 0.50 are an indication of convergent validity. Convergent validity was also assessed by how well the items load on their respective latent variables. Figure 6 below shows standardized item-factor loadings for all five constructs. All standardized item-factor loadings were 0.70 or higher, except for one item for contextual knowledge (which had a loading of 0.69), indicating good convergent validity for the items measuring each of these constructs. Further, all loadings were significant at p-value < 0.01.

Figure 6. Combined measurement and structural model: Standardized solution in LISREL

An analysis of the AVE scores and the squared correlations in Figure 5 indicated that the AVE scores were greater than the square of the correlation between the focal factor and other factors, suggesting adequate discriminant validity. A more rigorous chi-square (χ2) test of discriminant validity indicates whether a unidimensional rather than a two-dimensional model can account for the inter-correlations among the observed items in each pair. For ten comparisons, the chi-square value for the test of discriminant validity between pairs of latent factors must be equal to or greater than 10.83 for significance at p-value = 0.01. The chi-square difference between the correlated model and the measurement model with correlations fixed to 1 indicated that all values were significant at p < 0.01 (see Figure 5 above), suggesting discriminant validity between all pairs.

The five factor correlated measurement model was judged to have good model-data fit with χ2 = 186.47 for 125 degrees of freedom (chi-square per degree of freedom = 1.49), Root Mean Square Error of Approximation (RMSEA) = 0.044, Non-Normed Fit Index (NNFI) = 0.97, and Comparative Fit Index (CFI) = 0.98. Values of RMSEA less than 0.05, and NNFI and CFI values above 0.95, indicate good model-data fit (Hu & Bentler, 1999). All items had item-factor loadings greater than 0.69 (p-value < 0.01). The largest modification index (12.05) in the structural model was an error correlation between two items in conceptual knowledge; the expected value of the change for this modification is only 0.05. There were also two cross-loadings to conceptual knowledge; the expected value of change for the larger of the two was 0.19, indicating a relatively weak cross-loading as compared to the much stronger standardized loading on the respective construct.
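The convergent and discriminant validity checks reported above can be summarized with a short calculation: AVE as the mean squared standardized loading of a construct's items, compared against the squared inter-construct correlation in the Fornell-Larcker manner. The loadings below are hypothetical placeholders chosen only to reproduce the reported AVE values.

```python
import numpy as np

# AVE from standardized loadings, compared with the squared correlation between constructs.
def average_variance_extracted(std_loadings):
    loadings = np.asarray(std_loadings, dtype=float)
    return float(np.mean(loadings ** 2))

ave_conceptual = average_variance_extracted([0.91, 0.88, 0.90, 0.89])   # ~0.80
ave_contextual = average_variance_extracted([0.69, 0.72, 0.74, 0.76])   # ~0.53
correlation = 0.59                                                      # strongest reported correlation

# AVE greater than the squared correlation suggests discriminant validity.
print(ave_conceptual > correlation ** 2, ave_contextual > correlation ** 2)  # True True
```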
Structural Model Results

In order to test the substantive hypotheses, a combined measurement and structural LISREL model was developed (see Figure 6). The results of this analysis were used to accept or reject
the hypotheses based on the significance of the standardized structural coefficients of the relationships. In order to evaluate the significance of the structural coefficients, a reasonable model-data fit is necessary; this was evidenced by the various fit statistics. Examination of the various absolute and relative fit indices indicated good model-data fit with a chi-square of 197.58 for 129 degrees of freedom, chi-square per degree of freedom = 1.53, p-value = 0.0001, RMSEA = 0.046, NNFI = 0.97, and CFI = 0.98. Figure 6 illustrates the structural relationships (γ) between the exogenous variable conceptual knowledge and the endogenous variables (η) operational knowledge and contextual knowledge. It also depicts the structural relationships (β) of operational knowledge and contextual knowledge with performance and innovation. Evaluation of modification indices indicated three correlations among error terms. These correlated error terms were relatively weak, with the largest modification index being 11.43 between CONC3 and CONC4. No structural modifications or cross-loadings were suggested.

Given the good model-data fit, the proposed hypotheses were evaluated. All the hypotheses proposed were supported (p-value < 0.01) by the results of the data analysis. It is widely accepted that having better knowledge can improve one's work performance. This research investigated the types of task knowledge that can impact specific individual work outcomes and the possible inter-relationships between types of task knowledge. Conceptual knowledge had a strong path coefficient to both operational knowledge (γ = 0.24, t = 3.03) and contextual knowledge (γ = 0.63, t = 8.38). Thus, Hypotheses H1 and H2, indicating that a higher level of conceptual knowledge will positively enhance the operational and contextual knowledge of the individual, were supported. The results confirmed the widely held belief that knowledge related to a deeper understanding of why knowledge workers do certain actions can
actually equip them with better knowledge to do such actions. Contextual knowledge also had a strong path coefficient (β = 0.55, t = 5.72) to operational knowledge. Thus, Hypothesis H3 (the higher the contextual knowledge of an individual, the higher the operational knowledge of the individual) was supported. This suggests that if individuals are able to draw on a richer context for a given task, the operational knowledge needed to perform that task can be enhanced. Hypothesized relationships leading to innovation and performance were examined next (i.e., H4, H5, and H6). Contextual knowledge had a strong path coefficient (β = 0.28, t = 3.86) to innovation. Hence, Hypothesis H4 was supported, suggesting that contextual knowledge positively enhances the innovativeness of an individual's work. This implies that as more contextual knowledge is available to individuals, their work outcomes may become more innovative. In other words, they are able to generate innovative ideas based upon a greater amount of varied and disparate information on the context of the specific tasks. Operational knowledge had a strong path coefficient (β = 0.43, t = 5.86) to performance, supporting H5. This suggests that operational knowledge is a key type of knowledge for impacting knowledge worker performance. Hypothesis H6, representing the impact of innovation on performance, was also supported (β = 0.36, t = 5.44), indicating that innovative work outcomes enable knowledge workers to be more efficient and effective and to produce higher quality outcomes.
DISCUSSION

Knowledge management success is a broad concept when viewed from an organizational perspective. Previous literature has identified various factors that are considered to be critical for KM success (Jennex & Olfman, 2005). These include various organizational and individual characteristics along with aspects of the KM
implementation environment that influence KM success. These are factors that contribute to an organization's success at building knowledge-based competencies or taking advantage of the knowledge created by individuals. Knowledge is primarily created by individuals and then shared among a community of knowing in an organization. To understand and evaluate how these factors contribute to individual knowledge, and subsequently to KM success, it is imperative that we are able to measure individual knowledge and understand how it contributes to other individual success outcomes. In the context of how key success factors for KM impact individual behavior, this chapter provides a model of KM success at the individual level. The model describes how task knowledge affects productivity benefits for individual knowledge workers. We provide a measure of task knowledge as it relates to the individual's work. Task knowledge consists of three dimensions: conceptual, contextual, and operational knowledge. Increased task knowledge is considered to be the enhancement of an individual's mental model of frameworks and routines related to their work. Hypotheses concerning the interrelationships among these task knowledge dimensions and their subsequent effect on other net benefits such as innovation and performance are tested. The results indicate that conceptual knowledge has an indirect rather than a direct effect on innovation and performance. Conceptual knowledge works through contextual and operational knowledge to impact innovation and performance, respectively. Thus, conceptual knowledge is a necessary, but not sufficient, condition for achieving benefits such as innovation and performance. In order to ensure that conceptual knowledge is utilized for enhancing innovation and performance, managers need to combine classroom knowledge (conceptual) with on-the-job training where individuals gain operational and contextual knowledge. Innovation and performance improve more rapidly if conceptual
knowledge is combined with some operational experience and some contextual knowledge of the broader work situation. Contextual knowledge helps enhance operational knowledge. Know-what and know-how can be enhanced by having greater knowledge of know-who, know-where, and know-when. This suggests that operational knowledge may not be fully usable for improving performance without a specified context for action. For example, people may know the product development process, but their success in this process will be enhanced if they have identified a specific target market, have better knowledge of their customers' needs, and know when the product needs to be introduced to provide a first-to-market advantage. Previous research has not explicitly conceptualized and measured contextual knowledge or explored its relationships to antecedents and consequences. Future studies should further explore the relationships between contextual knowledge and other important variables in knowledge work. The results indicate that contextual knowledge also plays an important role in enhancing innovation. For example, the knowledge of customer requirements will help knowledge workers identify innovations that meet or exceed customer expectations. It is likely that individuals are able to quickly generate novel ideas to be more innovative because contextual knowledge provides a richer background to connect disparate ideas. The results suggest that performance is improved by either doing things more efficiently or more innovatively. In order to improve knowledge workers' performance, managers need to consider how to mix both rational and experiential approaches to integrate working and learning. In the context of product development, Eisenhardt and Tabrizi (1995) distinguish between a rational and an experiential approach to improving performance (e.g., product development time). The rational approach involves planning operational processes carefully before execution to avoid delays and to speed development, but is limited
to existing knowledge. The experiential approach uses an iterative cycle of doing and learning to enhance product development time as the individual learns by doing (operational knowledge) and obtains feedback on his/her work and how his/her part of the design work interfaces with that of other engineers (contextual knowledge). A mix of rational and experiential approaches to knowledge work may enable managers to get work done both more efficiently and more innovatively.
LIMITATIONS AND FUTURE DIRECTIONS

The results should be interpreted with caution as they are based upon one sample. Further research is necessary to cross-validate these success measures and the structural relationships among them. If cross-validated, these instruments can be used in future research for evaluating the effectiveness of antecedent factors such as KM practices or KMS CSFs. Though it is not uncommon to use self-reported measures of performance in this type of research, measuring performance outcomes using more objective metrics can substantially increase confidence in the proposed model. Using the task knowledge measures developed here, future research may explore improvements in work outcomes based on time, quality, and quantity dimensions such as actual time-to-completion of respondent tasks. In addition to using objective performance measures, using perceptual measures from supervisors or peers for the dependent variables should also enhance confidence in the proposed model. Because we use a perceptual measure eliciting respondents' level of knowledge, only the knowledge that the respondents consciously recognize as their task-related knowledge is measured in this research. It is possible that the work-related outcomes are also affected by knowledge that may not be directly related to an individual's work. For example, when taking up a new job, individuals
bring with them a set of skills that they have developed over time through their previous work and life experiences. This may include processes and thought patterns that have evolved with their earlier experiences. These processes and thought patterns may or may not bear directly on the requirements of the new work. Such knowledge may have an impact on their work in subtle ways, either positively or negatively. Future research could focus on assessing the extent of the impact of such knowledge that may not be captured in reflective measures as used in this research. The conceptual, contextual, and operational knowledge dimensions seem to be valid for overall, general task-related knowledge when considered over a period of time, such as when accomplishing a particular project. Whether such a three-factor conceptualization of task knowledge can also be demonstrated in other, more specific domains of knowledge is uncertain. Similarly, whether the current conceptualization of knowledge is valid for the broader, more general knowledge needs of knowledge workers outside the manufacturing context needs to be examined. In this chapter, we have focused on knowledge content as a critical aspect of knowledge for determining individual performance and hence KM success. Measurement of other facets of knowledge such as volume, form, and detail could also be explored in future research. For example, Chilton and Bloodgood (2008) examine the tacit-explicit dimension of knowledge and have developed measures for it. Researchers examining the form aspect of knowledge may explore the effectiveness of using and sharing different forms of knowledge representation, such as auditory, graphical, video, and textual forms, in various situations. Further, the forms of knowledge representation that are effective in specific situations, and their interrelationships, may also be explored. Another possible avenue for research is to examine the applicability of the current model of knowledge management success measures used
here at the team and organizational levels. The impact of different dimensions of individual task knowledge on team- and organizational-level factors such as team-building capability, the ability to foster stronger ties with immediate customers, and the contribution to organizational intellectual capital may also be investigated in subsequent research. Though a greater data collection effort may be needed for such investigations, a wealth of statistical tools such as hierarchical linear modeling should facilitate such multilevel analysis.
CONCLUSION

In this chapter, we have developed and tested a model of KM success measures at the individual level and explored their interrelationships. The model is developed based on the conceptualization of individuals' workplace knowledge as task knowledge. It explores task knowledge's relationship to other workplace outcomes such as innovation and performance. Task knowledge is successfully modeled as consisting of three dimensions: conceptual, contextual, and operational knowledge. Conceptual knowledge is found to have a significant impact on contextual and operational knowledge. Operational knowledge is also impacted by contextual knowledge. Contextual knowledge has the greatest impact on innovation. Individual performance is affected by operational knowledge and innovativeness. The valid and reliable measurement instruments developed for evaluating KM success at the individual level should be an important tool for advancing empirical research in knowledge management. The instruments are short and easy to use in other studies of KM success.

REFERENCES

Agarwal, R., Krudys, G., & Tanniru, M. (1997). Infusing learning into an information systems organization. European Journal of Information Systems, 6(1), 25–40. doi:10.1057/palgrave.ejis.3000257

Alavi, M., & Leidner, D. E. (2001). Knowledge management and knowledge management systems: Conceptual foundations and research issues. Management Information Systems Quarterly, 25(1), 107–133. doi:10.2307/3250961

Amabile, T. (1996). Creativity in context. Boulder, CO: Westview Press.

Anantatmula, V., & Kanungo, S. (2006). Structuring the underlying relations among the knowledge management outcomes. Journal of Knowledge Management, 10(4), 25–42. doi:10.1108/13673270610679345

Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103(3), 411–423. doi:10.1037/0033-2909.103.3.411

Brockman, B. K., & Morgan, R. M. (2003). The role of existing knowledge in new product innovativeness and performance. Decision Sciences, 34(2), 385–420. doi:10.1111/1540-5915.02326

Chilton, M. A., & Bloodgood, J. M. (2007). The dimensions of tacit & explicit knowledge: A description and measure. In Proceedings of the 40th Annual Hawaii International Conference on System Sciences (HICSS 2007), Hawaii, USA.

Chilton, M. A., & Bloodgood, J. M. (2008). The dimensions of tacit & explicit knowledge: A description and measure. International Journal of Knowledge Management, 4(2), 75–91.
Cohen, D. (1998). Toward a knowledge context: Report on the first annual U.C. Berkeley forum on knowledge and the firm. California Management Review, 40(3), 22–40.

Davenport, T. H., & Prusak, L. (1998). Working knowledge: How organizations manage what they know. Boston: Harvard Business School Press.

DeLone, W. H., & McLean, E. R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60–95. doi:10.1287/isre.3.1.60

DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9–30.

Dhaliwal, J., & Benbasat, I. (1996). The use and effects of knowledge-based system explanations: Theoretical foundations and a framework for empirical evaluation. Information Systems Research, 7(3), 342–362. doi:10.1287/isre.7.3.342

Earl, M. (2001). Knowledge management strategies: Toward a taxonomy. Journal of Management Information Systems, 18(1), 215–234.

Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–384. doi:10.2307/2666999

Eisenhardt, K. M., & Tabrizi, B. N. (1995). Accelerating adaptive processes: Product innovation in the global computer industry. Administrative Science Quarterly, 40(1), 84–110. doi:10.2307/2393701

Engestrom, Y., Engestrom, R., & Karkkainen, M. (1995). Polycontextuality and boundary crossing in expert cognition: Learning and problem solving in complex work activities. Learning and Instruction, 5(4), 319–336. doi:10.1016/0959-4752(95)00021-6

Garud, R. (1997). On the distinction between know-how, know-what and know-why. In Huff, A., & Walsh, J. (Eds.), Advances in strategic management (pp. 81–101). Greenwich, CT: JAI Press.

Gasson, S. (2005). The dynamics of sensemaking, knowledge, and expertise in collaborative, boundary-spanning design. Journal of Computer-Mediated Communication, 10(4).

Grant, R. M. (1996). Toward a knowledge-based theory of the firm. Strategic Management Journal, 17(Special Issue), 109–122.

Grover, V., & Davenport, T. H. (2001). General perspectives on knowledge management: Fostering a research agenda. Journal of Management Information Systems, 18(1), 5–17.

Guo, Z., & Sheffield, J. (2006). A paradigmatic and methodological examination of KM research: 2000 to 2004. In Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS 2006), Hawaii, USA.

Hackman, J. R., & Oldham, G. R. (1980). Work redesign. Reading, MA: Addison-Wesley.

Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55. doi:10.1080/10705519909540118

Janz, B. D., & Prasarnphanich, P. (2003). Understanding the antecedents of effective knowledge management: The importance of a knowledge-centered culture. Decision Sciences, 34(2), 351–384. doi:10.1111/1540-5915.02328

Jennex, M. E., & Olfman, L. (2005). Assessing knowledge management success. International Journal of Knowledge Management, 1(2), 33–49.

Jennex, M. E., & Olfman, L. (2006). A model of knowledge management success. International Journal of Knowledge Management, 2(3), 51–68.

Jennex, M. E., Smolnik, S., & Croasdell, D. T. (2007). Towards defining knowledge management success. In Proceedings of the 40th Annual Hawaii International Conference on System Sciences (HICSS 2007), Hawaii, USA.

Johnson, B., Lorenz, E., & Lundvall, B. A. (2002). Why all this fuss about codified and tacit knowledge? Industrial and Corporate Change, 11(2), 245–262. doi:10.1093/icc/11.2.245

Kim, D. H. (1993). The link between individual and organizational learning. Sloan Management Review, 35(1), 37–51.

Kogut, B., & Zander, U. (1992). Knowledge of the firm, combinative capabilities, and the replication of technology. Organization Science, 3(3), 383–398. doi:10.1287/orsc.3.3.383

Kulkarni, U. R., Ravindran, S., & Freeze, R. (2006-7). A knowledge management success model: Theoretical development and empirical validation. Journal of Management Information Systems, 23(3), 309–347. doi:10.2753/MIS0742-1222230311

Madjar, N., Oldham, G. R., & Pratt, M. G. (2002). There's no place like home? The contributions of work and nonwork creativity support to employees' creative performance. Academy of Management Journal, 45(4), 757–768. doi:10.2307/3069309

March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71–87. doi:10.1287/orsc.2.1.71

Nonaka, I., & Takeuchi, H. (1995). The knowledge-creating company: How Japanese companies create the dynamics of innovation. New York: Oxford University Press.

Oldham, G. R., & Cummings, A. (1996). Employee creativity: Personal and contextual factors at work. Academy of Management Journal, 39(3), 607–635. doi:10.2307/256657

Pfeffer, J., & Sutton, R. I. (1999). Knowing 'what' to do is not enough: Turning knowledge into action. California Management Review, 42(1), 83–109.

Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. The Journal of Applied Psychology, 88(5), 879–903. doi:10.1037/0021-9010.88.5.879

Pomerol, J., Brezillon, P., & Pasquier, L. (2002). Operational knowledge representation for practical decision-making. Journal of Management Information Systems, 18(4), 101–115.

Rulke, D. L., & Galaskiewicz, J. (2000). Distribution of knowledge, group network structure, and group performance. Management Science, 46(5), 612–625. doi:10.1287/mnsc.46.5.612.12052

Schultze, U., & Leidner, D. (2002). Studying knowledge management in information systems research: Discourses and theoretical assumptions. Management Information Systems Quarterly, 26(3), 213–242. doi:10.2307/4132331

Scott, S. G., & Bruce, R. A. (1994). Determinants of innovative behavior: A path model of individual innovation in the workplace. Academy of Management Journal, 37(3), 580–607. doi:10.2307/256701

Smith, T. W. (1983). The hidden 25 percent: An analysis of nonresponse on the 1980 general social survey. Public Opinion Quarterly, 47(3), 386–404. doi:10.1086/268797

Van de Ven, A. (1986). Central problems in the management of innovation. Management Science, 32(5), 590–607. doi:10.1287/mnsc.32.5.590

Wiig, K., & Jooste, A. (2004). Exploiting knowledge for productivity gains. In Holsapple, C. W. (Ed.), Handbook on knowledge management, Vol. 2: Knowledge directions (pp. 289–308). New York: Springer Science and Business Media.

Wu, J., & Wang, Y. (2006). Measuring KMS success: A respecification of the DeLone and McLean's model. Information & Management, 43(6), 728–739. doi:10.1016/j.im.2006.05.002

Yoshioka, T., Herman, G., Yates, J., & Orlikowski, W. J. (2001). Genre taxonomy: A knowledge repository of communicative actions. ACM Transactions on Information Systems, 19(4), 431–456. doi:10.1145/502795.502798
ENDNOTE

1. A previous version of this article was presented at the 41st Annual Hawaii International Conference on System Sciences, January 7-10, 2008.
APPENDIX A: MEASUREMENT ITEMS FOR TASK KNOWLEDGE AND PERFORMANCE OUTCOMES
Chapter 8
Validating Distinct Knowledge Assets: A Capability Perspective

Ron Freeze, Emporia State University, USA
Uday Kulkarni, Arizona State University, USA

DOI: 10.4018/978-1-60566-709-6.ch008
ABSTRACT

Identification and measurement of organizational Knowledge Management capabilities are necessary to determine the extent to which an organization utilizes its knowledge assets. We developed and operationalized a set of constructs to measure capabilities associated with the management of knowledge assets, identified as distinct Knowledge Capabilities (KCs) comprising the overall Knowledge Management (KM) capability of an organizational unit. Each KC represents a distinct kind of knowledge that requires different organizational processes and technological support. This delineation of knowledge allows targeted improvement of a specific KC. We present validation of these capability constructs with empirical evidence from two separate business units in a large semiconductor manufacturing company, providing the basis of measurement standardization for KM Capability improvement. Confirmatory factor analysis affirmed four KCs, each identified as an overall factor influencing a set of latent descriptor variables. Second Order and General-Specific Structural Equation Models of each capability provide evidence as to the validity of measurement of these knowledge assets. A standardized instrument for measuring knowledge capabilities would not only allow benchmarking, but also allow tracking capabilities over time and linking them to those performance metrics that are deemed appropriate by the organization.
INTRODUCTION

The quest to leverage knowledge assets through effective Knowledge Management (KM) is a strategic initiative for many firms. Management
literature has noted the lack of effective management of knowledge and called for establishing quantitative measures for these intangible assets (Teece, 1998; Zack, 1999b). Unfortunately, most KM initiatives in reality have been information projects that result in only the consolidation of data and not much by way of improvements in
knowledge flows or knowledge sharing (Gold et al., 2001). In an attempt to assess the contribution of IS/IT initiatives to a firm’s sustainable competitive advantage, researchers in the IS domain have looked at IT resources and capabilities through the lens of the Resource Based View of the firm (Barney, 1991; Melville et al., 2004). Both IS and KM researchers have viewed resource and capability investments as impacting organizational effectiveness (Tanriverdi, 2005). However, a consistent shortcoming has been the inadequacy in measurement of these resources and capabilities (Wade and Hulland, 2004). In order to evaluate KM initiatives and their ability to leverage knowledge assets, firms must focus on the identification and measurement of specific knowledge assets and the capabilities that they represent within an organization. Only through adequate conceptualization of knowledge and measurement of capabilities associated with its management can firms begin to tie knowledge assets to value generating outcomes. Thus, capability measurement is the logical first step in justifying investments in KM projects that can ultimately move the firm towards a sustainable competitive advantage. Knowledge assets are grounded in the experience and expertise of individuals (Teece, 1998). The ability of an organization to use them has been portrayed as a type of organizational capability in prior research. We use the term KM Capability to refer to this overall organizational capability. The conceptual development of KM Capability can benefit from the rich theoretical literature on capability research which associates organizational capabilities (of various kinds) with its performance (Gold et al., 2001; Santhanam and Hartono, 2003; Zhu, 2004). Knowledge is still a highly nebulous and debated concept in business literature and it collectively covers a wide range of intangible assets. For this reason it is desirable to classify this concept into multiple types and try to study the capabilities associated with each type separately.
Research efforts at understanding KM Capability and its association with organizational effectiveness have attempted to define multiple KM related constructs. Gold et al. (2001) proposed that the overall KM Capability consists of knowledge process capability and knowledge infrastructure capability with both impacting organizational effectiveness. In their model, process capability incorporates the stages of lifecycle through which knowledge progresses. The knowledge infrastructure capability includes technology, structure and culture as its building blocks. Their notion of organizational knowledge views all knowledge similarly and fails to recognize different types of knowledge that KM Capability must incorporate. Tanriverdi (2005) presents KM Capability as a second order capability comprised of an organization’s Product, Customer and Managerial KM Capability. Each of these first order constructs comprised four stages of the knowledge lifecycle. In this case, generic forms of the knowledge lifecycle stages have been used for each first order construct which implies a separation of processes. We assert that organizational knowledge covers a wider range of assets and propose that different types of knowledge assets require different organizational processes and technology support to be utilized effectively. Another significant attempt at conceptually defining a framework for measuring KM Capability is the Cognizant Enterprise Maturity Model (CEMM) that introduced the concept of measuring 15 Key Maturity Areas within an organization to improve its business value through KM (Harigopal and Satyadas, 2001). While the CEMM identifies a multitude of knowledge processes through the Key Maturity Areas, knowledge is differentiated only through their discussion of tacit and explicit knowledge. Each of these frameworks has provided valuable steps toward understanding the nature of KM within an organization. However, none have identified separate capabilities in distinct knowledge areas that may be individually
measured and leveraged within a single organization to more effectively meet its objectives. The objective of this research is to further develop and validate a set of Knowledge Capability (KC) measures that accurately capture a firm’s overall KM Capability (Freeze and Kulkarni, 2007). Each KC is the knowledge related capability of an organization within a particular knowledge type. We draw upon previously identified KM application areas such as knowledge repositories, lessons learned, expert networks and communities of practice, etc. (King et al., 2002), and characteristics of different types of knowledge assets to conceptualize the KCs. We developed a Knowledge Management Capability Assessment instrument that operationalizes the KCs. The contribution of this paper lies in extending previous research via improving the granularity of measurement for KM Capability by hypothesizing separate knowledge capabilities, developing and validating the necessary measures, and defining the steps necessary to improve each KC individually and the KM Capability as a whole. We provide validation of the KCs with empirical evidence from two large independent organizational units within a Fortune-50 semiconductor manufacturing company. We begin by reviewing how prior research has viewed the composition of a firm’s knowledge asset structure in Section II. These viewpoints include human capital participation in knowledge related activities, importance of technological factors, lifecycle stages of knowledge, and the tacit/ implicit/explicit nature of knowledge. We use these various viewpoints to recognize the different types of knowledge and the abilities needed to leverage them effectively. This prior conceptual work had a significant influence on the composition of each KC. We identify four important KCs that represent distinct KM capabilities - Lessons Learned, Knowledge Documents, Expertise and Data. These four KCs are sufficiently distinct, encompass a
majority of the knowledge capabilities within a firm and arose from the literature review of the various viewpoints elaborated below. While we believe that these four KCs represent a majority of the knowledge capabilities within a firm, other KCs may exist. KM is an evolving concept and its understanding can undergo changes as the maturity of its use increases. Nevertheless, the explication of these KCs should follow the construction criteria identified and may include additional relevant viewpoints. We present these four KCs as tested constructs to measure and improve the KM Capability of an organization. The conceptual development of the KCs was done in a field setting with a Fortune-50 semiconductor manufacturing firm in the south-western United States. The conceptual development, construction of the questionnaire, target population and data collection are described in Section III. Each KC consists of multiple latent descriptor factors that represent an organization’s ability to effectively and efficiently manage this aspect of knowledge. These latent descriptor factors are considered First Order constructs. With the data gathered from two large business units of the company, we conducted a Confirmatory Factor Analysis (CFA) using maximum likelihood factoring and verified the alignment adequacy of items to the posited latent descriptive factors. Results of the convergent/discriminant validity and the CFA confirm the adequacy of the constructs. We then tested each KC using General-Specific and Second Order measurement models. This analysis establishes each capability as a significant construct and validates the measurability of the KC factors. Significance of the measurement models provides the evidence of the reliability of measurement for individual capabilities. These results are presented in Section IV. The concluding remarks in Section V include implications for researchers and practitioners as well as further research directions.
KNOWLEDGE ASSET FRAMEWORK

In spite of the recognized need for creation and utilization of knowledge assets, a standard, well-accepted description of what knowledge is continues to be debated. Some view knowledge as non-existent without the knower (Fahey and Prusak, 1998), while others claim to have successfully captured it in knowledge objects (Carlsson et al., 1996). In order to make effective use of knowledge assets, organizations must be able to first recognize and then assess the true value of these resources. Only when these knowledge assets are clearly identified can the capabilities associated with them be measured and the effective management of knowledge begin. In the following paragraphs, we discuss the perspectives offered in prior research on the concepts of knowledge and Knowledge Management.
Human Capital and Technology

Knowledge assets are intangible assets that encompass the knowledge as well as the ability of an organization to leverage that knowledge. They must include the extent to which a firm's human capital exploits the knowledge. A firm's employees, also called knowledge workers, are integral to the capabilities associated with the knowledge assets. This perspective views knowledge assets as organizational resources intertwined with the human capital as defined within the resource-based view of the firm. The literature on resource-based strategy treats human capital as one of the key rent-generating (knowledge) assets of a firm (Barney, 1991; Coff, 1997). Technology designed to facilitate the interaction of knowledge with the human capital through each stage of its lifecycle is another important aspect of knowledge capabilities. Davenport et al. (2002) identify a few major factors – management and organization, information technology, among others – that influence the performance of knowl-
edge workers and knowledge-based organizations, emphasizing the interplay of organizational and technological factors in knowledge work.
Knowledge Lifecycle

A lifecycle- or knowledge-process-oriented view of knowledge assets stipulates that companies that want to develop and use knowledge effectively need to treat knowledge differently according to the stages of its life (Birkinshaw and Sheehan, 2002). KM researchers have regarded these stages of the knowledge lifecycle as: knowledge flows, steps to KM, architectures for explicit knowledge, and knowledge lifecycle (Birkinshaw and Sheehan, 2002; Satyadas et al., 2001; Zack, 1999b). Evaluating the different stages proposed resulted in our viewing the knowledge lifecycle for this research as a four-stage acquisition/storage/retrieval/application cycle. Interaction between a firm's human capital and knowledge at different stages of its lifecycle is facilitated by the technological factors briefly described below. The actual acquisition of knowledge and the decision to transfer it reside solely with the capabilities associated with a firm's human capital. Knowledge can only be acquired if the knowledge worker recognizes its value. The next logical stage in leveraging knowledge assets is to codify and capture this new knowledge in repositories under existing or expanded taxonomies. These processes begin the storage stage of the knowledge assets and create the potential to use that captured knowledge. The retrieval stage is the result of the decision to reuse/apply existing knowledge; the success of any attempt to leverage a firm's knowledge assets is measured by whether knowledge reuse has occurred, which is the culmination of the entire cycle. This movement of knowledge exemplifies the importance of recognizing the knowledge lifecycle. Closer examination of the knowledge lifecycle is needed in the context of different knowledge types to understand each KC.
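To make the four-stage cycle concrete, the following sketch (Python, purely illustrative; the class and method names are our own and do not describe any system discussed in this chapter) traces a single knowledge item through acquisition, storage, retrieval, and application.

from dataclasses import dataclass, field

@dataclass
class KnowledgeItem:
    content: str
    context_tags: set        # taxonomy categories assigned when the item is stored

@dataclass
class KnowledgeRepository:
    items: list = field(default_factory=list)

    def acquire(self, content, context_tags):
        # Acquisition: a knowledge worker recognizes the value of the knowledge.
        return KnowledgeItem(content, set(context_tags))

    def store(self, item):
        # Storage: codify and capture the item under an existing or expanded taxonomy.
        self.items.append(item)

    def retrieve(self, query_tags):
        # Retrieval: locate stored items whose context overlaps the task at hand.
        return [i for i in self.items if i.context_tags & set(query_tags)]

    def apply(self, query_tags):
        # Application: reuse of the retrieved knowledge completes the cycle.
        return [i.content for i in self.retrieve(query_tags)]

repo = KnowledgeRepository()
repo.store(repo.acquire("Run thermal stress test before final assembly", {"testing", "assembly"}))
print(repo.apply({"assembly"}))   # ['Run thermal stress test before final assembly']

Each method corresponds to one lifecycle stage; in practice the acquisition and application decisions remain with the knowledge worker, with technology mainly supporting storage and retrieval.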
Tacit/Implicit/Explicit Knowledge

Tacit knowledge has engaged researchers for many years and is described in a multitude of ways: practical know-how, difficult to articulate, transferred only via observation and practice, subconsciously understood and applied, and rooted in action, experience and involvement in a specific context (Harigopal and Satyadas, 2001; Koskinen, 2003; Nonaka, 1994; Teece, 1998; Zack, 1999b). Similarly, there is a wealth of research about explicit knowledge depicting its essence as embodied in code or language, knowledge already documented, precisely or formally articulated, codified and communicated in symbolic form and/or natural language (Alavi and Leidner, 2001; Harigopal and Satyadas, 2001; Koskinen, 2003; Zack, 1999b). Knowledge management researchers debate whether knowledge can exist external to human beings and therefore be captured electronically. The research of tacit and explicit knowledge is intrinsic to this debate. A holistic view of organizational knowledge assets must encompass both the tacit and explicit nature of knowledge and the interplay that exists between these two types of knowledge. The connection between tacit and explicit knowledge is apparent when one recognizes that tacit knowledge is the means by which explicit knowledge is created, captured, assimilated, and disseminated (Fahey and Prusak, 1998) and where tacit knowledge forms the background necessary for assigning the structure to develop and interpret explicit knowledge (Alavi and Leidner, 2001; Polanyi, 1975). These connections between explicit and tacit knowledge imply a continuum that provides a scale of media richness vs. externalization: face-to-face (tacit knowledge), telephone, written personal, written formal, numeric formal (explicit knowledge) (Koskinen, 2003). The continuum of tacit to explicit knowledge hints at a process in which tacit knowledge is converted or transformed into explicit knowledge. This movement of knowledge from tacit to explicit
is where the domain of implicit knowledge exists. Organizational Learning (OL) literature defines implicit knowledge as that which results from the induction of an abstract representation of the structure that the stimulus environment displays. This knowledge is acquired in the absence of conscious, reflective strategies to learn (Reber, 1989). To place a KM perspective on this definition, the tacit knowledge of experts has unconsciously been made implicit. The transfer of knowledge from being tacit to implicit is unobservable since this process occurs within an individual. Tacit knowledge is “unconsciously” understood and is “difficult to articulate”. Implicit knowledge can be consciously understood and articulated, but has not yet been articulated or captured explicitly. This implicitness enables the possibility of transforming what was originally tacit into explicit. The OL literature has researched implicit learning and provided support that “implicit knowledge can be retained for longer periods than explicit knowledge” (Tunney, 2003). This means that, once tacit knowledge is made implicit, it resides with greater permanence within the human capital and therefore extends the time available for making that knowledge explicit. As the knowledge lifecycle has identified steps in the movement of knowledge, the types of knowledge that are available, useable and transferable must be recognized within each KC. Recognizing the richness of implicit knowledge that exists in each KC can facilitate the construction of venues for transforming implicit knowledge to explicit knowledge that becomes an integral part of an organization’s knowledge assets.
Knowledge Capabilities

The framework presented here provides a method to assess the overall capability of an organization to manage its knowledge within the four KCs mentioned earlier: Lessons Learned, Knowledge Documents, Expertise and Data. Higher levels of KCs may result in improved organizational per-
formance. That association is beyond the scope of this paper. We describe each KC in terms of four elements: (1) the recognition of the capability in prior research, (2) the interaction of human capital and technology, (3) the tacit/implicit/explicit nature of the knowledge and (4) the knowledge lifecycle flow. Lessons Learned or best-known methods are defined as situation-specific useful knowledge gained while completing tasks or projects. Each Lesson Learned begins as tacit knowledge in a knowledge worker and at some point it becomes implicit knowledge. Lessons Learned, as internal benchmarking or best practice transfer, are identified as “one of the most common applications (in KM)” (Alavi and Leidner, 2001). Internal benchmarking is the process of identifying, sharing, and using the knowledge inside one’s own organization (O’Dell and Grayson, 1998). Lessons Learned are unique individual aspects of knowledge and their identification as best practices imply that they are highly tacit/implicit, singular and specific to situations. Although lessons may be unique and learned in specific circumstances, one can develop a process to facilitate the identification, capture and transfer of such lessons to other similar situations. One of three basic repository types found by Davenport et al. (1998) to be informal internal knowledge…is sometimes referred to as “lessons learned”. An organization’s human capital can be focused to collect or capture these lessons through reviews conducted at the completion of projects, on achieving major milestones, and/or at periodic intervals such as operational updates. The lessons, captured as successes, failures, solutions, etc., can be organized in a context-specific manner in a repository for reuse. Such knowledge can then be retrieved when the context warrants its use or, if the lessons are generalizable, they may be distributed or used for training other knowledge workers. Knowledge Documents are defined as codified knowledge originating from published sources and containing highly explicit knowledge typically
having a long shelf life. Knowledge Documents may be text-based forms that include project reports, technical reports, research reports and publications. This “field of information (codified knowledge) can include statistics, maps, procedures, analyses…” and can include alternative forms such as pictures, drawings, diagrams, presentations, audio and video clips, on-line manuals, and tutorials (McDermott, 1999). The Cognizant Enterprise Maturity Model identifies “knowledge already documented as a procedure, concept, or theory” as explicit knowledge to be stored in a repository (Harigopal and Satyadas, 2001). While many of these sources of codified knowledge originate internally, “knowledge sources may lie within or outside the firm” (Zack, 1999a). Such knowledge needs to be organized through sorting and categorizing via an established taxonomy into a repository for use within the organization. Search engines are the primary technology used by organization’s knowledge workers to locate and use these knowledge documents. An organization’s human capital must recognize the explicit nature, the internal/external origin, and the referential usage of this knowledge source. The interaction of the human capital with this knowledge source mainly occurs at the search and retrieval stage of the life cycle. Expertise is viewed as the knowledge that may be gained through experience or formal education. The need for domain expertise initiates the demand for transfer of this highly tacit form of knowledge. The Personalization Strategy (Hansen et al., 1999) relies extensively on the identification of experts in various areas of expertise. This strategy views knowledge transfer as occurring through the mentoring done by experts. Identifying such experts and classifying their areas of expertise such that their knowledge can be efficiently tapped is an active research area (Alavi and Leidner, 2001; Dooley et al., 2002). In many organizations, corporate directories have been created to map internal expertise (Alavi and Leidner, 2001) while individually held expertise has been
shown to facilitate team creativity (Tiwana and McLean, 2005). Most importantly, individuals in organizations often need to access knowledge that is outside their own area of expertise (Dooley et al., 2002) and to do so may require accessing the tacit knowledge in people's heads (Gurteen, 1998; Quintas et al., 1997). The tools used to facilitate the domain expert's tacit knowledge transfer range from capturing experts' contact and domain information, so that other knowledge workers can shorten the time needed to locate and contact known experts, to collaboration technologies that simulate face-to-face communication. Experts are also a source of implicit knowledge that has the potential to be made explicit. Hence, some organizations encourage, with supporting technologies, transformation of experts' implicit knowledge to explicit knowledge for wider dissemination to the organization's human capital. Data provides many complementary benefits to the leveraging of other KCs. Data can be transformed into decision- and action-relevant meaning. Databases and data warehouses containing aggregated or otherwise summarized historical information are the most basic form of KM tools (Brown and Duguid, 2000; Fahey and Prusak, 1998). The value of this highly explicit form of knowledge is dependent on various dimensions such as context, usefulness, and interpretation (Alavi and Leidner, 2001). A common view holds that data is raw numbers, information is processed data and knowledge is authenticated information (Vance, 1997). The iconoclastic view reverses the data-to-knowledge assumption and states that knowledge must exist before information can be formulated and before data can be measured to form information (Tuomi, 1999). The raw data that exists in data warehouses has been a significant source of business intelligence (BI) in recent years. Yet BI initiatives have not realized the anticipated benefits to firm performance (Rogers et al., 2005). The capture of an organization's data into the warehouse is mostly automated through the extract-transform-load process (Hoffer et al.,
2005). Major organizational benefits accrue when this knowledge source is utilized through analysis and mining tools to support tactical and strategic decision-making. The focus of organizations on this source of knowledge justifies the inclusion of Data as a KC and the lack of success indicates a need to assess the knowledge capability required to capitalize on it.
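As a minimal illustration of the automated capture and subsequent analysis described above (a sketch only; the file names, column names, and the use of pandas are our own assumptions, not details given by the authors):

import pandas as pd

# Extract: pull raw operational records (a CSV stands in for the source systems here).
raw = pd.read_csv("lot_test_results.csv")                      # hypothetical file
# Transform: clean the data and derive the fields the warehouse schema expects.
raw["test_date"] = pd.to_datetime(raw["test_date"])
raw["yield_pct"] = 100 * raw["units_passed"] / raw["units_tested"]
# Load: append the transformed records to the warehouse table (another CSV here).
raw.to_csv("dw_lot_yield.csv", mode="a", header=False, index=False)

# Analysis of the summarized data is what turns this Data KC into decision support,
# e.g., average monthly yield by product line.
summary = raw.groupby([raw["test_date"].dt.to_period("M"), "product_line"])["yield_pct"].mean()
print(summary)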
METHODOLOGY

The conceptual development of the KCs was done in partnership with a large semiconductor manufacturing firm comprised of multiple business units in a global setting. The research methodology included face validation of the four KC areas derived from prior research, construction of the measurement instrument, identification of the target population, data collection, and analysis for construct validation. This was accomplished in phases over a period of 18 months. The first phase started with the development of KC constructs including the knowledge processes that comprise them. The company charged an internal team of five experts in Process Management, Information Technology, Value Engineering, HR, and Training and Development to work with the external researchers (authors) for the entire project. The authors and the internal team studied several internal knowledge documents, training documents, questionnaires, guidelines, memos, and other artifacts used by knowledge workers. They also studied various KM systems and interviewed knowledge workers in multiple business units regarding their knowledge sharing activities. The aim was to achieve an initial face validation of the KC constructs. From these field interactions, the team developed a KM Capability assessment instrument to measure various aspects of the four KCs described earlier. Each KC was operationalized using a set of latent descriptor factors guided collectively by its unique involvement in the knowledge lifecycle,
need for technological support, and interaction with the human capital. For example, Lessons Learned is hypothesized to be composed of four descriptor factors: Capture, Repository, Taxonomy, and Application/Use. A focus group of 12 individuals within the subject organization evaluated the first version of the instrument. The group consisted of 10 to 12 senior and mid-level managers who assessed the meaning, relevance, and completeness of the instrument (Kulkarni and Freeze, 2004). The feedback from the focus group resulted in clarifying the questions to ensure applicability to the target audience. We then surveyed a pilot group of 98 individuals whose feedback resulted in shortening and substantially simplifying the questionnaire. Another focus group ensured that the meaning of the questions remained intact. Table 1 shows the KC factors, the descriptor factors, and an abbreviated version of the scale items in the questionnaire. The groupings in the table show the hypothesized descriptors for each KC. Each scale item was measured on a 5-point Likert scale denoting the extent to which the item was applicable. The organization's KM Training and Development group solicited participant organizational groups without the assistance of the researchers. Two large business units were selected to undergo the initial set of assessments. These two units' data are analyzed for the purpose of this research. (Eventually, the company used this instrument to assess the KM Capability of many business units; interested readers may request those assessment results from the authors.) The first business unit, BU1, was charged with assuring the internal material and product quality across the entire organization (population about 1,000). The knowledge workers of BU1 were lab managers, engineers, and technicians. A majority were located in North America, with some groups in European and Asian regions. The second unit, BU2, was responsible for the development and sourcing of system software across all product
lines of the organization (population about 700). BU2 knowledge workers were mainly application engineers and software engineers, but also included other enabling groups. BU2 was more globally dispersed. The responsibilities of the two units covered substantially different functional areas. The knowledge workers of the two units were charged with tasks belonging to different domains, required distinctly different skill sets, and accessed knowledge of different types and forms. Even though the two business units resided within a single organization, we believe the differences in their responsibilities, business goals, and objectives provided us with some amount of external validity. Each member of the two business units received an introductory email from senior-level sponsors in their business unit concerning the upcoming administration of the survey, its potential impact on KM, and the importance of completing the survey. This was followed by a second email with instructions and a link to the survey instrument. A weekly reminder email with a link to the survey was sent during the four-week data collection period. Incentives in the form of raffles were used to boost the participation rates. These efforts resulted in response rates of 22% (223 usable responses) and 43% (303 usable responses) in BU1 and BU2, respectively. We note that the responses were voluntary and therefore may have introduced some response bias in the results. To test for this, BU2 allowed us to poll a sample of non-respondents of the survey. Discriminant analysis of this group with the original respondents did not provide evidence of a response bias. BU1 was reluctant to allow us to contact a sampling of the non-respondents; hence, we tested BU1 for a potential response bias by comparing the first and fourth quartile (early and late) responses using discriminant analysis. This analysis was also conducted with BU2 data for comparison. These procedures for BU1 and BU2 provided no evidence of a response bias.
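As a quick arithmetic check, the reported response rates follow directly from the stated populations and usable-response counts:

populations = {"BU1": 1000, "BU2": 700}   # approximate populations stated above
usable      = {"BU1": 223,  "BU2": 303}   # usable responses

for bu in populations:
    rate = usable[bu] / populations[bu]
    print(f"{bu}: {usable[bu]}/{populations[bu]} = {rate:.0%}")
# BU1: 223/1000 = 22%
# BU2: 303/700 = 43%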
Table 1. Knowledge Management Capability Assessment Instrument

Expertise

Expertise Repository
  er1  Availability of repository(ies)
  er2  Accessibility of repository(ies)
  er3  Usefulness of repository content
  er4  Information about internal & external experts
  er5  Search capabilities
  er6  Ease of searching
  er7  Multiple search criteria

Expertise Taxonomy
  et1  Existence of taxonomy
  et2  Clarity and standardization
  et3  Comprehensiveness
  et4  Extensibility

Collaboration Tools **
  ec1  Routineness of use
  ec2  Ease of use
  ec3  Access to internal & external experts
  ec4  Multiple tool set

Expert Access/Consulting
  ea1  Practice of looking for available expertise
  ea2  Ease of finding experts
  ea3  Embedded in normal work practices

Expert Profiling & Registration
  ep1* Existence of a registering & profiling process
  ep2  Ease to use
  ep3  Allows self-updating
  ep4  Managed for consistency

Communities of Practice **
  es1  Participation in SIGs
  es2  Encouragement for participation
  es3  Availability of relevant SIGs
  es4  Participation on company time
  es5  Financial support for participation

Lessons Learned

Lessons Learned Repository(ies)
  lr1  Availability of repository(ies)
  lr2  Accessibility of repository(ies)
  lr3  Usefulness of repository content
  lr4  Search & retrieval capabilities
  lr5  Ease of searching
  lr6  Multiple search criteria

Taxonomy
  lt1  Existence of taxonomy
  lt2  Clarity and standardization
  lt3  Comprehensiveness

Capture
  lc1  Practice of capture
  lc2  Consolidation and management
  lc3* Individual and group responsibilities
  lc4  Existence of a systematic processes

Application/Use
  la1  Practice of application/use
  la2* Ease of finding relevant lessons
  la3  Embedded in normal work practices

(Table 1 continued below)
ANALYSIS AND RESULTS

While evaluating each KC construct, we hypothesized each latent descriptor factor within an individual KC to be a trait (first order or specific factor) for the set of measures. We first conducted Confirmatory Factor Analysis (CFA) to deter-
mine the validity of the hypothesized descriptor factors within each KC. After this analysis, we constructed two measurement models for each KC - a Second Order model and a General-Specific model - to perform further confirmatory analysis. These Structural Equation Models provide further
Table 1. (continued)

Knowledge Documents

Knowledge Documents Repository(ies)
  kr1  Availability of repository(ies)
  kr2  Accessibility of repository(ies)
  kr3  Usefulness of repository content
  kr4  Access to internal & external documents
  kr5  Supports rich formats
  kr6  Clarity of meta-data

Taxonomy **
  kt1* Existence of taxonomy
  kt2* Clarity and standardization
  kt3* Comprehensiveness

Search & Retrieval
  ks1  Ease to use
  ks2  Effectiveness of retrieval system
  ks3  Multiple search criteria

Categorization
  kc1  Existence of a categorization process
  kc2  Ease to use
  kc3  Embedded in normal work practices
  kc4  Managed to ensure adherence

Reference & Use **
  ku1* Practice of reference/use
  ku2* Ease of finding documents

Data

Data Repository(ies)
  dr1  Availability of repository(ies)
  dr2  Accessibility of repository(ies)
  dr3  Currency of data
  dr4  Level of detail/summarization
  dr5  Clarity of meta-data

Data Relevance
  dv1  Timeliness
  dv2  Periodicity
  dv3  Completeness
  dv4  Usefulness of format
  dv5  Accuracy

Decision Support Tools
  ds1* Ease of use
  ds2* Sufficiency

KC factors and descriptor factors are shown as group headings above their scale items. * Dropped scale item for EFA; ** Dropped factor for CFA.
evidence of the validity of measurement of these knowledge assets.
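To illustrate what such a measurement model looks like in practice, the following sketch specifies a Second Order model for the Lessons Learned KC using the hypothesized descriptors and items from Table 1 (illustrative only: the semopy package, the data file name, and the exact syntax are our assumptions, not the authors' tooling, which is described later as LISREL):

import pandas as pd
import semopy

# Second Order measurement model: items load on descriptor factors,
# which in turn load on the Lessons Learned capability factor.
model_desc = """
Repository =~ lr1 + lr2 + lr3 + lr4 + lr5 + lr6
Taxonomy   =~ lt1 + lt2 + lt3
Capture    =~ lc1 + lc2 + lc3 + lc4
AppUse     =~ la1 + la2 + la3
LessonsLearned =~ Repository + Taxonomy + Capture + AppUse
"""

data = pd.read_csv("bu1_scale_items.csv")      # hypothetical respondent-by-item file
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())            # factor loadings and second-order paths
print(semopy.calc_stats(model))   # chi-square, TLI, CFI, RMSEA, and other fit indices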
Confirmatory Factor Analysis
the observed correlation matrix (Fabrigar et al., 1999; Tabachnick and Fidell, 2001). Through the use of ML, an iterative process was applied to each KC to identify problem scale items for both business units. During the iterative process, we set three goals while performing the CFA for determining the inclusion or exclusion of scale items on a factor. The first goal was to retain a minimum of three variables per factor in order to ensure stability (Velicer and Fava, 1998). For determining the loading criteria, we used the following recommended thresholds: 0.71 – excellent, 0.63 – very good, 0.55 – good and 0.45 – fair (Comrey and Lee, 1992). If a scale
Table 2. Confirmatory Factor Analysis ML Results

Knowledge Capability | Descriptor Factors | Variance Explained (%) BU1/BU2 | Chi-Square (p < .001) BU1/BU2 | TLI BU1/BU2 | Observations BU1/BU2
Expertise | Repository, Taxonomy, Access, Profiling, Collaboration, CoP's | 85 / 78 | 654 / 614 | 0.92 / 0.92 | 250 / 301
Lessons Learned | Repository, Taxonomy, Use, Capture | 84 / 80 | 139 / 143 | 0.95 / 0.95 | 243 / 290
Knowledge Documents | Repository, Categorization, Search | 88 / 82 | 140 / 140 | 0.96 / 0.96 | 228 / 283
Data | Repository, Relevance | 90 / 87 | 97 / 123 | 0.97 / 0.96 | 224 / 291
item did not load on the hypothesized factor, it was removed rather than trying to provide a possible alternative explanation for reorienting that item. This parsimony is especially critical when the removal improves the model’s fit indices and facilitates achieving our second goal which was that the removal of any scale item should improve the model’s Chi-Square and Tucker/Lewis Index (TLI) fit indicators. The third goal was to achieve a model that is significant for both business units and can guide the development of the Structural Equation Models for confirmatory factor analysis. In Table 1, latent descriptors and scale items that were dropped after the CFA are marked with an asterisk. The final loadings of the four KCs, Lessons Learned, Knowledge Documents, Expertise, and Data, are presented in Tables 3-6. Summary results of the overall model for each KC of the two business units (BU1 and BU2) are presented in Table 2 – Confirmatory Factor Analysis ML Results. The results are discussed below along with the impact each goal had on the analysis for the individual capability. The Lessons Learned CFA agreed with the initial four hypothesized factors and with the exception of the fourth factor, the first goal of retaining at least three variables per factor was achieved (Table 3). As can be seen from Table 3, most of the scale items had “excellent” loadings on their respective factors. The fourth factor, Ap-
plication/Use, had two scale items with loadings greater than 0.80. Since both these scale items had loadings that were much better than the threshold for “excellent”, retaining the Application/Use factor with only two scale items is not deemed to be a major drawback. More importantly, the two improperly loading scale items, lc3 and la2, were from different hypothesized factors (one from Capture and the other from Application/Use). Of the two scale items, lc3 did not load for BU1 and la2 did not load for BU2. Following our second goal, we iteratively removed each scale item (la2, then lc3) which improved both the Chi-Square statistic and TLI upon each removal. Our third goal was achieved with a TLI of 0.95 for both business units. The CFA results of the Expertise KC returned six identifiable factors as was originally hypothesized (Table 4). We achieved our initial goal of retaining at least three variables per factor for all six factors. Of the twenty-seven scale items, only four did not have “excellent” loadings. The only scale item of the four that loaded improperly was ep1. Instead of loading on Expert Profiling & Registration, ep1 had the highest loading on Repository. Removal of this scale item increased the TLI for both business units. Our third goal was achieved with TLIs of 0.92 for each business unit. Further analysis shows that five of the six factors had three scale items with excellent loadings for
Table 3. Lessons Learned Factors (loadings for BU1/BU2; * = item dropped)

Repository: lr1 0.67/0.70, lr2 0.82/0.86, lr3 0.82/0.80, lr4 0.87/0.92, lr5 0.88/0.86, lr6 0.89/0.86
Taxonomy: lt1 0.63/0.64, lt2 0.87/0.87, lt3 0.83/0.91
Capture: lc1 0.67/0.79, lc2 0.89/0.74, lc3 *, lc4 0.51/0.49
App/Use: la1 0.83/0.84, la2 *, la3 0.84/0.87
Further analysis shows that five of the six factors had three scale items with "excellent" loadings for each business unit. The sixth factor, Expert Access, had two items with "excellent" loadings for both business units, and the third item had either a "very good" (BU1) or a "good" loading (BU2). The Knowledge Documents KC was originally hypothesized to be composed of five factors. However, the five scale items hypothesized for Taxonomy and Reference & Use did not load on a separate/distinct factor (Table 5). We were able to achieve our first goal only for the factors of Repository, Categorization, and Search. Pursuant to our second goal, the scale items for Taxonomy (kt1, kt2, and kt3) and Reference & Use (ku1 and ku2) were iteratively removed. The TLI improved with each iteration and a more parsimonious three-factor model emerged. Our third goal was achieved with a TLI of 0.96 for each business unit. For each of the remaining three factors, there were at least three scale items with "excellent" loadings. The Data KC (Table 6) was originally hypothesized as having three factors. The descriptor variable Decision Support Tools and the associated scale items (ds1 and ds2) encountered communalities greater than one while running the ML and would not complete computation. This factor did not originally have three items, and so these scale items, as well as the factor, were iteratively removed, which resulted in convergence to two factors for Data with "excellent" loadings for all scale items. The final TLI was 0.97 for BU1 and 0.96 for BU2.
Knowledge Capability SEM Models

Another approach to confirming the KC factors is to build a Structural Equation Model (SEM) for each capability. For this purpose, we constructed two measurement models, a Second Order model and a General-Specific model, to perform further confirmatory analysis of each KC. In the Second Order model, the descriptors within each capability correspond to first order factors and the KC factor is the second order factor.
Table 4. Expertise Factors (loadings for BU1/BU2; * = item dropped)

Repository: er1 0.64/0.64, er2 0.80/0.85, er3 0.84/0.85, er4 0.85/0.85, er5 0.93/0.90, er6 0.93/0.90, er7 0.92/0.90
Taxonomy: et1 0.62/0.58, et2 0.77/0.78, et3 0.80/0.80, et4 0.79/0.78
Access: ea1 0.84/0.97, ea2 0.69/0.59, ea3 0.84/0.75
Profiling: ep1 *, ep2 0.73/0.76, ep3 0.76/0.81, ep4 0.75/0.81
Collaboration: ec1 0.91/0.74, ec2 0.94/0.87, ec3 0.78/0.75, ec4 0.85/0.77
CoP's: es1 0.79/0.57, es2 0.88/0.86, es3 0.87/0.80, es4 0.92/0.94, es5 0.83/0.88
In the case of the General-Specific model, the descriptors correspond to specific factors and the KC is the general factor. Although these two models are not mathematically equivalent, they provide similar interpretations of the capability under investigation (Chen et al., 2006; Gustafsson and Balke, 1993). The main difference between the two models is that the Second Order model evaluates the influence of the second order factor (e.g., the Expertise KC) on the first order factors (descriptors of Expertise), whereas the General-Specific model evaluates the influence of the general factor (KC) directly on the specific (scale) items that comprise it. In both models, the descriptors are considered to be orthogonal. We used LISREL 8.54 for the investigation of all sixteen measurement models (4 KCs * 2 Models * 2 business units). The final results of the measurement models for each capability are summarized in Table 7 – SEM Confirmatory Analysis Results.
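For readers who wish to reproduce this kind of confirmatory model outside LISREL, the sketch below shows how the Second Order measurement model for the Lessons Learned capability might be specified with the open-source semopy package for Python, using lavaan-style syntax. This is an illustrative assumption rather than the authors' original setup: the data frame bu1_items and the availability of the listed fit statistics are assumed, and item names follow Table 3 (lc3 and la2 are excluded, as in the earlier CFA).

```python
# Hypothetical semopy specification of the Second Order model for Lessons Learned.
import semopy

second_order_desc = """
Repository =~ lr1 + lr2 + lr3 + lr4 + lr5 + lr6
Taxonomy   =~ lt1 + lt2 + lt3
Capture    =~ lc1 + lc2 + lc4
AppUse     =~ la1 + la3
LessonsLearned =~ Repository + Taxonomy + Capture + AppUse
"""

model = semopy.Model(second_order_desc)
model.fit(bu1_items)               # bu1_items: item-level responses for one business unit
print(semopy.calc_stats(model).T)  # chi-square, CFI, TLI, RMSEA, and related indices
```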
Table 5. Knowledge Document Factors (loadings for BU1/BU2; * = item dropped)

Repository: kr1 0.76/0.74, kr2 0.86/0.80, kr3 0.88/0.79, kr4 0.78/0.80, kr5 0.76/0.77, kr6 0.67/0.73
Taxonomy: kt1 *, kt2 *, kt3 *
Categorization: kc1 0.58/0.58, kc2 0.78/0.83, kc3 0.80/0.81, kc4 0.81/0.84
Reference & Use: ku1 *, ku2 *
Search: ks1 0.85/0.82, ks2 0.86/0.78, ks3 0.86/0.76
Table 6. Data Factors (loadings for BU1/BU2; * = item dropped)

Repository: dr1 0.71/0.66, dr2 0.88/0.85, dr3 0.87/0.88, dr4 0.90/0.91, dr5 0.87/0.89
Relevance: dv1 0.89/0.88, dv2 0.90/0.91, dv3 0.83/0.85, dv4 0.80/0.85, dv5 0.86/0.88
Decision Support Tools: ds1 *, ds2 *
Table 7. SEM Confirmatory Analysis Results

Knowledge Capability | Model Type | Group | N | df | χ2 | NNFI | CFI | SRMR
Lessons Learned | Second Order | BU1 | 223 | 100 | 465 | 0.95 | 0.96 | 0.093
Lessons Learned | Second Order | BU2 | 303 | 100 | 629 | 0.94 | 0.95 | 0.120
Lessons Learned | General-Specific | BU1 | 223 | 88 | 355 | 0.96 | 0.97 | 0.069
Lessons Learned | General-Specific | BU2 | 303 | 88 | 394 | 0.96 | 0.97 | 0.093
Data | Second Order | BU1 | 223 | 51 | 183 | 0.98 | 0.98 | 0.032
Data | Second Order | BU2 | 303 | 51 | 198 | 0.98 | 0.99 | 0.034
Data | General-Specific | BU1 | 223 | 43 | 136 | 0.98 | 0.99 | 0.048
Data | General-Specific | BU2 | 303 | 43 | 125 | 0.99 | 0.99 | 0.036
Expertise | Second Order | BU1 | 223 | 131 | 578 | 0.96 | 0.97 | 0.076
Expertise | Second Order | BU2 | 303 | 131 | 477 | 0.97 | 0.98 | 0.066
Expertise | General-Specific | BU1 | 223 | 117 | 461 | 0.97 | 0.97 | 0.034
Expertise | General-Specific | BU2 | 303 | 117 | 391 | 0.98 | 0.98 | 0.030
Knowledge Documents | Second Order | BU1 | 223 | 62 | 263 | 0.97 | 0.98 | 0.052
Knowledge Documents | Second Order | BU2 | 303 | 62 | 241 | 0.98 | 0.98 | 0.043
Knowledge Documents | General-Specific | BU1 | 223 | 52 | 193 | 0.97 | 0.98 | 0.030
Knowledge Documents | General-Specific | BU2 | 303 | 52 | 201 | 0.98 | 0.99 | 0.024
Path diagrams for the Lessons Learned KC representing the two measurement models, Second Order and General-Specific, for BU1 are in Figure 1a and 1b. Figures 2 through 4 show the Second Order and General-Specific measurement models for the other three KCs - Expertise, Knowledge Documents, and Data - for BU1. Similar figures for BU2 are omitted due to space constraints. Only the summarized results appear in Table 7. For confirming the factors within each capability, we began with a model that replicated the initial instrument and included all items and their loadings on the hypothesized descriptors. This confirmation ignores the results of the earlier CFA in order to test the adequacy of the initially hypothesized factors within each KC. If adequate Second Order and General-Specific models are achieved, these are considered the final models. We achieved significant results for this initial confirmation in the capabilities of Lessons Learned (Figure 1a and 1b) and Data (Figure 4a and 4b). For Knowledge Documents (Figure 3a and 3b), the five hypothesized factors would not converge using either model. We then used the CFA results to run a three-factor model, which achieved a good model fit. Expertise encountered similar problems with convergence, which prompted us to reevaluate its descriptor factors in light of the other KC descriptors, which were mainly knowledge process oriented. We concluded that although Collaboration Tools and Special Interest Groups were confirmed to be latent factors within the capability of Expertise in the original CFA, these factors did not represent any particular stage in the lifecycle view of knowledge capabilities and thus may in reality represent a different capability. These descriptors need further investigation. We removed these two factors and achieved convergence on a four-factor model for Expertise (see Figure 2a and 2b). The results in Table 7 represent these final measurement models. The fit indices in Table 7 provide four tests indicating the adequacy of fit for each KC factor. The overall KC is represented as a Second Order factor and a General factor.
Figure 1. BU1 Lessons Learned Measurement Models
Figure 2. BU1 Expertise Measurement Models
Figure 3. BU1 Knowledge Documents Measurement Models
Figure 4. Data Measurement Models
These two representations were each replicated, with similar goodness of fit, in the two business units. NNFI and CFI values above a threshold of 0.90 are considered to indicate a good fit for the model. An SRMR below the threshold of 0.08 is considered a good fit for the model. As can be seen from Table 7, all models for each business unit represent a good fit and thus validate the KC constructs. The following analysis and comparisons of the Second Order and General-Specific model outputs reference only the Lessons Learned KC (Figure 1a and 1b). Similar comparisons can be made for each of the other KCs (Figures 2 through 4). Analysis of BU2's data provided similar results. The details and diagrams may be requested from the authors.
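As a quick reference, the fit-adequacy thresholds just cited can be captured in a small helper; the example call uses the Table 7 values for the Lessons Learned General-Specific model in BU1.

```python
# Fit-adequacy rule of thumb stated in the text: NNFI and CFI above 0.90, SRMR below 0.08.
def adequate_fit(nnfi: float, cfi: float, srmr: float) -> bool:
    return nnfi > 0.90 and cfi > 0.90 and srmr < 0.08

print(adequate_fit(nnfi=0.96, cfi=0.97, srmr=0.069))  # True (Lessons Learned, General-Specific, BU1)
```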
Reviewing the Second Order model (Figure 1a), we see that for each of the descriptors (Repository, Taxonomy, Capture, and Application/Use), the loadings for each scale item are "excellent" based on the stated loading criterion of 0.71 or better. This provides the requisite construct validation (in addition to the original CFA), i.e., the instrument measures the intended separate latent concepts. Each path between the first order descriptor factors and the second order KC factor (Lessons Learned in this case) is significant, indicating the influence of the overall KC on each of its first order descriptor factors. When reviewing the General-Specific model (Figure 1b), each scale item is posited to load on both the general factor (KC) and one of the specific factors (descriptor). A General-Specific model does not follow the previously stated loading criterion; instead, the significance of the loading coefficients has greater meaning since each scale item is hypothesized to load on both a general and a specific factor. For Lessons Learned, the loadings on the general factor (Lessons Learned) are all significant, which indicates the influence of the general factor on all scale items. The scale item loadings for the specific factors (Repository, Taxonomy, Capture, and Application/Use) are also significant, even though these loadings vary from a low of 0.18 to a high of 0.71. These loadings represent the additional influence and explanation of variance that the specific factors have on each of the scale items, above and beyond the influence of the general factor (Chen et al., 2004). The fact that all items are also significant on the specific factors provides additional evidence of construct validity of the measurement model. Since the Second Order model is a more restricted model and has been demonstrated to be a nested version of the General-Specific model (Yung et al., 1999), the Chi-Square difference test indicates whether the two models are significantly different in their representation of the KC. The Chi-Square difference test statistic and its value for the Lessons Learned KC are:

χ2Δ = χ2(2nd Order) – χ2(General-Specific) = 465 – 355 = 110
dfΔ = df(2nd Order) – df(General-Specific) = 100 – 88 = 12

where χ2Δ is the difference between the Chi-Square statistics and dfΔ is the difference between the degrees of freedom of the Second Order model and the General-Specific model, respectively. Significance of the Chi-Square test is determined by consulting a Chi-Square table using the resulting Chi-Square value and the number of degrees of freedom. The Chi-Square test results for BU1 for the Lessons Learned KC indicate that the two models are significantly different (at p < .001). A review of the fit indices indicates that, although both models indicate the existence of Lessons Learned as a valid KC, the General-Specific model represents the Lessons Learned capability more accurately than the Second Order model. This means that, for any investigations involving Lessons Learned as an overall factor (e.g., relationships between the Lessons Learned capability and firm performance), the General-Specific model would be a better choice for representing this KC. On the other hand, if the theory indicates descriptor variables within Lessons Learned, then the Second Order model provides an adequate representation for testing the hypotheses. The Chi-Square difference test and comparison of model fit indices provided similar results for each capability for both business units. This indicates that the structure of the KCs and the descriptor variables are consistent across all the capabilities and that both models may be used depending on the theoretical basis from which they are applied.
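The significance of this difference can also be checked directly instead of consulting a printed Chi-Square table; a short SciPy sketch using the BU1 Lessons Learned values quoted above is shown below.

```python
# Chi-square difference test for the nested models (values from the text above).
from scipy.stats import chi2

chi2_diff = 465 - 355                  # Second Order chi-square minus General-Specific chi-square
df_diff = 100 - 88                     # difference in degrees of freedom
p_value = chi2.sf(chi2_diff, df_diff)  # upper-tail probability
print(p_value)                         # well below .001: the two models differ significantly
```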
CONCLUSION

In the process of establishing capabilities as knowledge assets, we have focused our efforts on establishing measurement consistency and the representation of each knowledge capability as a latent factor. Each capability was established using two measurement model forms: (1) a General-Specific SEM model and (2) a Second-Order SEM model.
Both models provided fit indices for all capabilities indicating models of good fit. The significance of the General factor and the Second Order factor representing the overall capability provides strong evidence supporting these knowledge assets as measurable capabilities. This evidence is further strengthened by the application of the models to two large independent business units in a leading semiconductor manufacturing company in order to confirm the measurability of the capabilities as knowledge assets. By using two measurement models within two business units, we have provided experimental rigor and some degree of external validity. While we have demonstrated the standardized measurability and recognized a different makeup for each knowledge asset, we recognize that these results may be limited by the fact that the data originated from a single organization. This limitation needs to be evaluated in light of the vastly different corporate directives and the autonomous nature of the two business units. One must also recognize that while the identification of four capabilities represents an attempt at enumerating diverse knowledge assets within most organizations, these KCs may not represent all that is considered knowledge by every organization. KM is an evolving field, and the definition of what constitutes knowledge can change as researchers and practitioners develop a better understanding of this complex concept. An immediate implication for managers is that a method has been provided to assess the capability level for these four knowledge assets. Recognizing that a specific knowledge capability is low in comparison to other knowledge capabilities and organizational strategic goals will allow KM initiatives to be targeted to that specific KC. For example, an organization may recognize a need for contacting experts and using relevant expertise as a knowledge asset. The Expertise KC may not be currently understood or exploited as well as the organization's documented knowledge that is systematically maintained and shared widely across the organization (Hansen et al., 1999). If this deficiency is identified and the need recognized, an organization may focus on improving the sharing of relevant expertise to complement the existing high capability in Knowledge Documents. Conversely, another organization may identify a higher need for effectively reusing the knowledge asset of Lessons Learned, but a lower need for directly sharing Expertise. The organization's business strategy may determine such differences in emphases for its use of knowledge, and the measurement can identify the relative strength of each capability. Organizational decision making can then be organized around the knowledge asset most beneficial to the organization's strategy. Assessment at the individual descriptor level allows management to focus on targeted improvements within the specific knowledge area. Recognizing the appropriateness of an organization's knowledge asset capabilities with respect to its business goals will assist in directing resources to initiatives that provide the greatest return. This delineation of knowledge assets has not been achieved in prior literature and represents an important improvement in the ability to target specific improvements in organizational KM capability. Potential business implications of measuring knowledge asset capabilities lie in the ability to tie KM to recognized value metrics, construct targeted knowledge sharing improvements, and match organizational/business unit goals to the need for capabilities in specific knowledge areas. Outcomes that knowledge assets causally affect may range from such soft measures as user satisfaction, perceived usefulness of knowledge, and decision effectiveness, to directly measurable unit-level or firm-level goal-oriented performance metrics such as cycle times, productivity, and customer satisfaction. Research is currently underway to investigate how knowledge capabilities relate to some of these metrics.
As an initial avenue of future research, the nature of the interaction between knowledge and human capital implies a potential influence of the organization's culture with respect to KM and knowledge sharing. While the capability measurement models are considered adequate without taking a cultural metric into account, an organization's culture may influence the causal relationship to the value achieved from these knowledge assets. Factors such as the leadership's commitment to knowledge sharing, rewards and incentive systems for promoting knowledge sharing behavior, attitudes of co-workers, and the importance placed on training while introducing new KM initiatives are all important aspects to be considered while investigating the causal relationships of KCs to value-indicating metrics of an organization. The implications for research are significant. A standardized instrument for measuring knowledge capabilities would not only allow benchmarking, but also allow tracking capabilities over time and linking them to those performance metrics that are deemed appropriate by the organization. The application of this measurement instrument to multiple organizations will improve the external validation of the KC measurement models and the identification of knowledge assets across organizations. Within the current organization, KM improvements have been initiated and a longitudinal study is in progress to validate the predictive ability of the instrument and the level of impact of each KC on proposed value metrics.
REFERENCES

Alavi, M., & Leidner, D. (2001). Review: Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues. Management Information Systems Quarterly, 25(1), 107–136. doi:10.2307/3250961
Barney, J. (1991). Firm Resources and Sustained Competitive Advantage. Journal of Management, 17(1), 99–120. doi:10.1177/014920639101700108
Birkinshaw, J., & Sheehan, T. (2002). Managing the Knowledge Life Cycle. MIT Sloan Management Review, 44(1), 75–83.
Brown, J. S., & Duguid, P. (2000). Balancing act: How to Capture Knowledge without Killing It. Harvard Business Review, 78(3).
Carlsson, S. A., El Sawy, O. A., Eriksson, I., & Raven, A. (1996). Gaining Competitive Advantage Through Shared Knowledge Creation: In Search of a New Design Theory for Strategic Information Systems. In Proceedings of the Fourth European Conference on Information Systems, Lisbon, Portugal (pp. 1067-1076).
Chen, F. F., West, S. G., & Sousa, K. H. (2006). A Comparison of Bifactor and Second-Order Models of Quality of Life. Multivariate Behavioral Research, 41(2), 189–225. doi:10.1207/s15327906mbr4102_5
Coff, R. W. (1997). Human Assets and Management Dilemmas: Coping with Hazards on the Road to Resource-Based Theory. Academy of Management Review, 22(2), 374–402. doi:10.2307/259327
Comrey, A. L., & Lee, H. B. (1992). A First Course in Factor Analysis (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Davenport, T. H., De Long, D. W., & Beers, M. C. (1998). Successful Knowledge Management Projects. Sloan Management Review, 39(2), 43–57.
Davenport, T. H., Thomas, R. J., & Cantrell, S. (2002). The mysterious art and science of knowledge-worker performance. Sloan Management Review, 44, 23–30.
Dooley, K. J., Corman, S. R., & McPhee, R. D. (2002). A Knowledge Directory for Identifying Experts and Areas of Expertise. Human Systems Management, 21, 217–228.
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the Use of Exploratory Factor Analysis in Psychological Research. Psychological Methods, 4(3), 272–299. doi:10.1037/1082-989X.4.3.272
Fahey, L., & Prusak, L. (1998). The Eleven Deadliest Sins of Knowledge Management. California Management Review, 40(3), 265–276.
Freeze, R., & Kulkarni, U. (2007). Knowledge Management Capability: Defining Knowledge Assets. Journal of Knowledge Management, 11(6). doi:10.1108/13673270710832190
Gold, A. H., Malhotra, A., & Segars, A. H. (2001). Knowledge Management: An Organizational Capabilities Perspective. Journal of Management Information Systems, 18(1), 185–214.
Gurteen, D. (1998). Knowledge, Creativity and Innovation. Journal of Knowledge Management, 2(1), 5–13. doi:10.1108/13673279810800744
Gustafsson, J., & Balke, G. (1993). General and Specific Abilities as Predictors of School Achievement. Multivariate Behavioral Research, 28, 407–434. doi:10.1207/s15327906mbr2804_2
Hansen, M. T., Nohria, N., & Tierney, T. (1999). What's Your Strategy for Managing Knowledge? Harvard Business Review, 77(2), 106–116.
Harigopal, U., & Satyadas, A. (2001). Cognizant Enterprise Maturity Model (CEMM). IEEE Transactions on Systems, Man and Cybernetics, Part C, Applications and Reviews, 31(4), 449–459. doi:10.1109/5326.983928
Hoffer, J. A., Prescott, M. B., & McFadden, R. R. (2005). Modern Database Management (7th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
King, W. R., Marks, P. V., & McCoy, S. (2002). The Most Important Issues in Knowledge Management. Communications of the ACM, 45(9), 93–97. doi:10.1145/567498.567505
Koskinen, K. U. (2003). Evaluation of Tacit Knowledge Utilization in Work Units. Journal of Knowledge Management, 7(5), 67–81. doi:10.1108/13673270310505395
Kulkarni, U., & Freeze, R. D. (2004). Development and Validation of a Knowledge Management Capability Assessment Model. In Proceedings of the Twenty-Fifth International Conference on Information Systems, Washington, DC.
McDermott, R. (1999). Why Information Technology Inspired but Cannot Deliver Knowledge Management. California Management Review, 41(4).
Melville, N., Kraemer, K., & Gurbaxani, V. (2004). Review: Information Technology and Organizational Performance: An Integrative Model of IT Business Value. Management Information Systems Quarterly, 28(2), 283–322.
Nonaka, I. (1994). A Dynamic Theory of Organizational Knowledge Creation. Organization Science, 5(1), 14–37. doi:10.1287/orsc.5.1.14
O'Dell, C., & Grayson, C. J. (1998). If only we knew what we know: Identification and transfer of internal best practices. California Management Review, 40(3), 154–174.
Polanyi, M. (1975). Personal Knowledge. Chicago: University of Chicago Press.
Quintas, P., Lefrere, P., & Jones, G. (1997). Knowledge Management: A Strategic Agenda. Long Range Planning, 385–391.
Reber, A. S. (1989). Implicit Learning and Tacit Knowledge. Journal of Experimental Psychology: General, 118(3), 219–235. doi:10.1037/0096-3445.118.3.219
Rogers, S. B., McDonald, K. D., & Brown, V. A. (2005). CFOs Positioned to Drive BI Integration. Financial Executive, 21(7), 46.
Santhanam, R., & Hartono, E. (2003). Issues in Linking Information Technology Capability to Firm Performance. Management Information Systems Quarterly, 27(1), 125–153.
Satyadas, A., Harigopal, U., & Cassaigne, N. P. (2001). Knowledge Management Tutorial: An Editorial Overview. IEEE Transactions on Systems, Man and Cybernetics, Part C, Applications and Reviews, 31(4), 429–437. doi:10.1109/5326.983926
Tabachnick, B. G., & Fidell, L. S. (2001). Using Multivariate Statistics (4th ed.). New York: Allyn and Bacon.
Tanriverdi, H. (2005, June). Information technology relatedness, knowledge management capability, and performance of multibusiness firms. Management Information Systems Quarterly, 29(2), 311–334.
Teece, D. J. (1998). Capturing value from knowledge assets: The New Economy, Markets for Know-How, and Intangible Assets. California Management Review, 40(3).
Tiwana, A., & McLean, E. R. (2005). Expertise Integration and Creativity in Information Systems Development. Journal of Management Information Systems, 22(1), 13–43.
Tunney, R. J. (2003). Implicit and Explicit Knowledge Decay at Different Rates: A Dissociation Between Priming and Recognition in Artificial Grammar Learning. Experimental Psychology, 50(2), 124–130. doi:10.1026//1618-3169.50.2.124
Tuomi, I. (1999). Data is More than Knowledge: Implications of the Reversed Hierarchy for Knowledge Management and Organizational Memory. In Proceedings of the Thirty-Second Hawaii International Conference on System Sciences, Hawaii. Los Alamitos, CA: IEEE Computer Society Press.
Vance, D. M. (1997). Information, Knowledge and Wisdom: The Epistemic Hierarchy and Computer-Based Information Systems. In Proceedings of the Third Americas Conference on Information Systems, Indianapolis.
Velicer, W. F., & Fava, J. L. (1998). Effects of variable and subject sampling on factor pattern recovery. Psychological Methods, 3(2), 231–251. doi:10.1037/1082-989X.3.2.231
Wade, M., & Hulland, J. (2004). Review: The Resource-Based View and Information Systems Research: Review, Extension, and Suggestions for Future Research. Management Information Systems Quarterly, 28(1), 107–142.
Yung, Y. F., Thissen, D., & McLeod, L. D. (1999). On the Relationship Between the Higher-Order Factor Model and the Hierarchical Factor Model. Psychometrika, 64(2), 113–128. doi:10.1007/BF02294531
Zack, M. H. (1999a). Developing a Knowledge Strategy. California Management Review, 41(3), 125–145.
Zack, M. H. (1999b). Managing Codified Knowledge. Sloan Management Review, 40(4).
Zhu, K. (2004). The complementarity of information technology infrastructure and e-commerce capability: A resource-based assessment of their business value. Journal of Management Information Systems, 21(1), 167–202.
Chapter 9
Assessing Knowledge Management:
Refining and Cross-Validating the Knowledge Management Index (KMI) using Structural Equation Modeling (SEM) Techniques

Derek Ajesam Asoh, Southern Illinois University Carbondale, USA & National Polytechnic, University of Yaounde, Cameroon
Salvatore Belardo, University at Albany, USA
Jakov (Yasha) Crnkovic, University at Albany, USA
ABSTRACT

With growing interest in KM-related assessments and calls for rigorous assessment tools, the objective of this study was to apply SEM techniques to refine and cross-validate the KMI, a metric to assess the degree to which organizations are engaged in knowledge management (KM). Unlike previous KM metrics research that has focused on scales, we modeled the KMI as a formative latent variable, thereby extending knowledge on formative measures and index creation from other fields into the KM field. The refined KMI metric was tested in a nomological network and found to be robust and stable when cross-validated, thereby demonstrating consistent prediction results across independent data sets. The study also verified the hypothesis that the KMI is positively correlated with organizational performance (OP). Research contributions, managerial implications, limitations of the study, and directions for further research are discussed.

DOI: 10.4018/978-1-60566-709-6.ch009
INTRODUCTION

The knowledge-based view of the firm (Grant, 1996) and the resource-based view of knowledge (Barney, 1991, 2001) have contributed to an understanding and recognition of knowledge as a unique resource that enables organizations to attain and maintain sustainable competitive advantage. The recognition of the value of knowledge has propelled many organizations to become more committed to managing their knowledge assets. As a result, knowledge management (KM) has evolved to become a prevalent, if not mandatory, practice in such organizations, where expectations are high that KM will contribute positively and significantly to bottom-line results and, consequently, to overall organizational performance (OP). Yet, despite massive investment in KM, many organizations are still struggling to assess and tie KM to such outcomes as improved performance (Chan & Chau, 2005). KM benefits are intangible in nature, and assessing the performance impact of KM may be one of the greatest challenges confronting organizations that have embarked on KM. However, the results of such an assessment are too valuable to be ignored, even if organizations face various challenges or other deterring factors in the assessment process. For example, assessing KM permits organizations to identify and possibly eliminate gaps in knowledge preparedness, set realistic expectations of KM benefits, and appreciate how such benefits relate to OP. As organizations approach KM as a means to improve their performance, being able to assess the degree of engagement in KM remains an important task that must be conducted with a reasonable degree of accuracy if they hope to use the results for better management decision making regarding the allocation and deployment of resources to meet performance goals through KM. The knowledge management index (KMI) has been proposed as a metric to measure the degree to which an organization is engaged in KM (Asoh, Belardo, & Crnkovic, 2002).
Although the usefulness of the KMI as a proxy for KM and predictor of OP has been reported in previous studies (Asoh, Belardo, & Crnkovic, 2004; Crnkovic, Belardo, & Asoh, 2004), the KMI has a number of limitations, which this study addresses in order to enhance its usefulness. First, in its current form, with 32 items measuring one construct, the KMI instrument may be perceived as lengthy and, as such, would be of limited usefulness (Comer, Machleit, & Lagace, 1989). Second, the KMI is conceptualized and measured as a mean score of responses to questionnaire items. Although summing item responses (Spector, 1992) or creating item parcels (Little, Cunningham, Shahar, et al., 2002) is acceptable practice, some researchers have stressed the necessity of first establishing that the items summed or used in the parcel are unidimensional (Gerbing & Anderson, 1988; Kim & Hagtvet, 2003). The unidimensionality of the KMI has not been investigated. Third, we believe that the KMI is multi-dimensional because KM itself is a multi-faceted organizational phenomenon. Without establishing the unidimensionality of the KMI items, attempts at validating the KMI in which groups of items are simply summed and used in regression equations, as in a recent study (Asoh, Belardo, & Crnkovic, 2005), may produce results that are not stable. Fourth, the KMI model has not been cross-validated. Researchers have been urged to validate empirical research as a means of maintaining rigor in the field (Straub, Boudreau, & Gefen, 2004). However, cross-validation is often ignored. Cross-validation, which involves the use of a second sample to test a theory/model that has been developed using a first sample, avoids the circular reasoning that arises when a theory/model is developed and tested using only one sample, as is often the case in validation-only studies.
Furthermore, cross-validation is essential to evaluating the accuracy of a model and is important because it demonstrates that the model can "generate consistent results, and will thus be of practical value in making predictions among members of the reference population upon which the model is based" (Sheskin, 2004, p. 1002). Given the interest in KM-related assessments (Anantatmula, 2005; Kankanhalli & Tan, 2005), the relevance of the KMI as a potential KM assessment metric (Asoh et al., 2004; Crnkovic et al., 2004), and the calls for rigorous assessment tools (Straub et al., 2004), the purpose and objectives of this study were to (1) refine and reduce the number of items of the KMI instrument; (2) conceptualize and investigate the KMI model as a multi-dimensional formative latent variable (LV) construct; (3) investigate the performance impact of KM via the KMI; and (4) investigate the stability of the refined KMI model across two independent datasets of the same population through cross-validation: developing a KMI nomological network model using a first sample (calibration) and testing it using a second sample (holdback/validation). The rest of this paper is organized as follows. The second section presents the theoretical background, including the rationale for KM assessment, a highlight of current work on KM metrics, the scope of the KMI, definitions of the elements of the KMI model, and a multi-dimensional LV model of the KMI. The third section presents the research models and hypotheses of the study. We present the methodology of the study in section four, discuss the results in section five, and conclude in section six with a discussion of the contributions, managerial and academic implications, limitations, and directions for further research.
THEORETICAL BACKGROUND

KM Assessment Requirements, Benefits, and Metrics

To assess KM requires the use of metrics, i.e., measures that yield key data about KM.
Such data can be translated into actionable information for managerial decision making processes. Unfortunately, as Lee, Lee, and Kang (2005) point out, many organizations that have introduced KM have yet to develop appropriate means of assessing KM effectiveness and usefulness. While some organizations currently have few or no metrics to assess their commitment to, and their engagement in, KM, it is precisely such assessment metrics, we contend, that should be developed, validated, and applied within organizational settings prior to attempting to associate KM benefits with OP. Executives and managers often look at metrics to understand past, current, and possible future business scenarios before making decisions; as such, KM assessment should be a routine or mandatory activity in organizations. Other benefits of KM assessment can be cited. For example, assessment of the performance impact of KM can provide the basis for the development and deployment of knowledge resources in the best possible way to maximize OP. Furthermore, by assessing KM and OP, key objectives such as evaluating investment, securing funding for KM, developing benchmarks, and deriving lessons for the future can be realized (Kankanhalli & Tan, 2005). The usefulness of KM metrics extends beyond the focus of individual organizations. Metrics permit comparability with other organizations, across industries, time, and geographical regions. Metrics also permit comparability of research and provide a basis for validation of theories and relationships among concepts, as well as facilitate replication of research without the need to reinvent the wheel by developing new instruments (Boudreau, Gefen, & Straub, 2001). Despite the necessity and usefulness of metrics, research on the development of KM assessment metrics has remained largely underdeveloped (Grover & Davenport, 2001). Equally, the practice of KM assessment has remained underdeveloped (Bontis, 2001).
These underdevelopments arise not only from the complexity of assessing organizational initiatives but also because KM is multi-faceted and knowledge has multiple interpretations (Alavi & Leidner, 2001). As KM is multi-faceted, it has multiple values (Kankanhalli & Tan, 2005) and is understood and assessed in a variety of ways (Shin, 2004). For example, the Balanced Scorecard, Economic Value Added, Intangible Asset Monitor, Market Value Added, Technology Broker, and Skandia Navigator have been discussed as different KM assessment metrics employed by various organizations (Bontis, 2001; Bose, 2004). In a more recent and structured review of KM assessment metrics, five main areas of KM research metrics (user of KM and KM system, KM System, KM Process, KM Initiative, and Organization as a whole) and eight areas of KM practice metrics (House of Quality, Balanced Scorecard, Process Classification Framework, Skandia Navigator, Intellectual Capital Index, Intangible Asset Monitor, Economic Value Added, and Tobin Q) were identified (Kankanhalli & Tan, 2005). We refer the reader to the details on these KM metrics in the works cited above. This study builds on, extends, and cross-validates results of previous studies (Asoh et al., 2004; Crnkovic et al., 2004) that focused on the KM process area within the research stream of KM metrics. In addition, rather than developing scales as in previous KM process metric research (Darroch, 2003; Lee et al., 2005), this study extends the above previous research by conceptualizing the KMI as a formative index (Diamantopoulos & Winklhofer, 2001); applying the SEM analytical technique as a means of confronting the a priori theory of a relationship between KM and OP with empirical data (Fornell, 1982); and cross-validating the KMI model. Furthermore, previous KM process metric research has focused only on processes, without taking into account the management of the KM processes or consideration of key factors that influence or enable the success of KM efforts, referred to as critical success factors (CSFs) (Asoh et al., 2002).
For example, Darroch (2003) developed scales to measure three KM behaviors and practices in organizations, which included knowledge acquisition, knowledge dissemination, and knowledge responsiveness or use. Although these scales could be useful in identifying knowledge gaps in organizations, they do not take into consideration factors that influence or enable the knowledge behaviors and practices, i.e., KM CSFs. Equally, Lee et al. (2005) constructed a scale used in conjunction with an analytical measure they call the knowledge management performance index (KMPI). The KMPI focuses primarily on five KM processes: knowledge creation, accumulation, sharing, utilization, and internalization. While the KMPI can be used to measure the quality of organizational knowledge, just as in the case of the study by Darroch (2003), the approach by Lee et al. (2005) neither takes into account the management of the five KM processes nor considers factors critical to the success of KM efforts, i.e., KM CSFs. But according to Asoh et al. (2002), to properly assess KM efforts, organizations must consider not only KMPs but also KM CSFs. Their underlying assumption, to which we subscribe, is that "in every organization, there is persistent interaction between knowledge management processes under the influence of critical success factors, orchestrated by some actors: employees, customers, partners, and the environment of the organization [in pursuit and support of organizational performance objectives]" (p. 26). One implication of the foregoing assumption is that it is possible to cast and investigate an organization's engagement in KM, as depicted by the KMI, within a common frame of KMPs and CSFs.
Scope of the Knowledge Management Index (KMI)

The level of commitment to and engagement in KM and, subsequently, the benefits that may accrue from managing knowledge, vary widely from one organization to another because of the multi-faceted nature of knowledge and KM.
Table 1. Belardo's Matrix: The reference base of the KMI model

Critical Success Factors (CSFs) \ Knowledge Management Processes (KMPs) | Identification | Elicitation | Dissemination | Utilization
Technology | | | |
Leadership | | | |
Culture | | | |
Measurement | | | |
Although any metric designed to assess organizational KM efforts should take into consideration as many facets as possible, it is important to develop a simple and parsimonious model with a manageable number of variables. Common denominators in any such metric must include processes that are common to KM (KMPs) and enablers or factors critical to the success of KM (CSFs). Drawing on research within the areas of KMPs and/or CSFs (Arthur Andersen, 1996; O'Dell, Wiig, & Odem, 1999), the KMI was proposed as a metric to assess the degree of an organization's engagement in KM within a common frame of four key KMPs (knowledge identification, elicitation, dissemination, and utilization) and four key CSFs (technology, leadership, culture, and measurement) (Asoh et al., 2002). For practical purposes, the KMI is framed within an evaluation matrix that relates the four key KMPs with four key CSFs. This matrix, referred to as Belardo's Matrix, forms the reference base of the KMI model and is depicted in Table 1 (Asoh et al., 2002). The choice of the elements constituting KMPs and CSFs was influenced by the works cited above and practical experience in teaching KM in an accredited MBA program at a Northeastern US university over many years.
Knowledge Management Processes (KMPs)

Different authors have attributed different names to the same KMPs. We maintain that a parsimonious yet relevant and informative model consists of the following stages: identification, elicitation, dissemination, and utilization (Asoh et al., 2002). These stages, we contend, effectively capture the various discussions and practices relevant to most KM programs. Knowledge identification focuses on discerning the location and value of knowledge, the roles and expertise of individual employees, constraints to knowledge flow, and opportunities to leverage knowledge. Knowledge elicitation focuses on "extracting" knowledge from relevant sources to meet the goals of the organization or to enhance the organization's knowledge management system. Knowledge dissemination focuses on distributing knowledge to organizational members, i.e., ensuring that those who have knowledge share it with those who do not have it. Knowledge utilization is defined as the application of knowledge for the attainment of organizational goals. With knowledge recognized as a key organizational resource, the capture (identification and elicitation), sharing (dissemination) and application (utilization) of knowledge become fundamental in creating, maintaining, and sustaining an organization's competitive advantage (Grant, 1996; von Krogh, 1998).
Knowledge Management Critical Success Factors (CSFs)

The CSFs identified in Belardo's Matrix (Table 1) - technology, leadership, culture, and measurement - are closely connected to organizational KM efforts, as explained below. Technology is defined as any machine-based mechanism that provides the foundation for solutions that automate and inform various organizational processes. In the context of KM, information technology (IT) is the main focus. IT has evolved from being a driver of KM to become a CSF in KM (Tsui, 2005). IT is critical because it ensures the creation of KM systems, the principal purpose of which is to add value to KM (Quaduss & Xu, 2005). Furthermore, IT is instrumental in the acquisition and dissemination of knowledge (Sher & Lee, 2004) and supports knowledge application and use by embedding knowledge in organizational routines (Alavi & Leidner, 2001). Common IT tools (e.g., groupware, e-mail, bulletin boards, online databases, intranets, data warehouses, software agents, search engines, retrieval and classification tools, e-collaboration tools, portals, and content management systems) facilitate KMPs (Hendriks & Vriens, 1999; Tsui, 2005) and enable the storage and sharing of organizational knowledge (Davenport, De Long, & Beers, 1998; Hansen, Nohira, & Tierney, 2001). The importance of IT to KM can be further appreciated when IT is viewed as a tool that enables the transfer of knowledge among organizational members across time and space. It is hoped that organizational members, thus enabled, can internalize knowledge and apply it toward organizational goals (Zyngier, 2003). Leadership is defined as the support provided for KM activities by management. Such support is critical to effective KM initiatives, particularly when it comes to direction and evaluation (Brown & Woodland, 1999; April, 2002).
An APQC study of the World Bank found that its success in the area of KM could be directly attributed to the efforts of top management, which was instrumental in "removing barriers by making learning a priority and eliminating the negative impacts of sharing [knowledge]" (O'Dell, 2004, p. 6). Contrary to the situation at the World Bank, the inability of top management to champion and stimulate KMPs has been deemed responsible for the failure of well-intentioned organizational KM efforts and a subsequent decline in OP (Chan & Chau, 2005). Culture is broadly defined as embodying "people issues" and is reflected in values, norms, and practices (De Long & Fahey, 2000; Schein, 2001). It influences individual motivation for learning (Amabile, 1997) which, in turn, shapes knowledge-related norms within an organization. Culture presents the greatest challenge to KM efforts since it can simultaneously act as a facilitator and an inhibitor of success (Ruggles, 1998) and presents a major barrier to creating and leveraging knowledge (De Long & Fahey, 2000). Measurement is defined as the continuous assessment of how well KM is proceeding within the organization. Measurement focuses on evaluating the appropriateness of knowledge and knowledge-related activities in relation to the needs of the organization. Linking KM to performance results helps make the business case for managing knowledge. Measurement provides indicators that can help the organization align KM with business strategy and allocate resources to increase the organizational knowledge base (O'Dell, 2004).
The KMI Dimensions and Latent Variable Model

To model the KMI in this study, we maintained the initial theoretical framework of Belardo's Matrix (Table 1) with the same two questionnaire items for each of the sixteen cells as proposed by Asoh et al. (2002). However, we contend that KM is multifaceted, and instead of considering the KMI as a mean score of the 32 questionnaire items, we conceived the KMI as an unobservable LV or construct associated with the 32 questionnaire items.
Figure 1. The KMI model (LV Model Type II, with 32 indicators)
In the absence of appropriate previous LV measurement models, it is necessary to define a preliminary model of the dimensions of a construct, since this enhances our ability to understand and interpret empirical results (Sethi & King, 1991; MacKenzie, 2003). By definition, the four KMPs (identification, elicitation, dissemination, and utilization) are distinct concepts, each of which is a first-order construct that can be cast across the frame of the CSFs to take into account the impact of the CSFs on the specific KMPs. Defining the KMI as the degree to which an organization is engaged in KM means it is the combination or aggregate of various organizational engagements in KMPs that determines the KMI, rather than the other way around. On this basis, we modeled the KMI as a second-order aggregate or formative construct rather than a superordinate or reflective construct (Edwards & Bagozzi, 2000; Diamantopoulos & Winklhofer, 2001; Jarvis, Mackenzie, & Podsakoff, 2003). When considered as a second-order formative unobservable LV, the level of the KMI is dependent on the summative effect of all four dimensions of KMPs as influenced by the CSFs. The expectation is that, as a formative LV, the KMI will increase or decrease if any of its dimensional components increases or decreases (Diamantopoulos & Winklhofer, 2001; MacKenzie, 2003).
Though considered at a higher level as a summation of dimensions (Sethi & King, 1991), this conceptualization of the KMI as a formative LV is in line with the one in which the summation occurs at the item level for each dimension (Asoh et al., 2005). Nevertheless, an important issue to address concerns the nature of the first-order constructs that constitute the second-order KMI model. An accurate specification of first-order models (as either formative or reflective) is one mandatory step toward the conceptualization and development of good constructs (Diamantopoulos & Winklhofer, 2001; Jarvis et al., 2003; MacKenzie, 2003). Within the perspective of multidimensional constructs, the KMI model can be considered as either a type II (reflective first-order and formative second-order) or a type IV (formative first-order and formative second-order) LV model (Jarvis et al., 2003). To decide on the nature of the first-order constructs in the KMI model, we drew on the various decision rules proposed in the literature (e.g., see Diamantopoulos & Winklhofer, 2002; Jarvis et al., 2003; MacKenzie, 2003). Given the definition of the KMI and its dimensions, we opted for reflective first-order constructs and investigated the KMI as a type II LV model, i.e., reflective first-order and formative second-order.
Figure 2. KMI-OP LV research model
What this means is that the 32 indicators (Q1, Q2, …, Q32) of Belardo's Matrix reflect the four first-order dimensions, which are then aggregated to form the KMI model as a second-order construct. Therefore, the KMI is depicted as a formative second-order model with the KMPs as dimensions, each of which is a reflective first-order construct (Figure 1).
RESEARCH MODEL AND HYPOTHESES

Since the main objective of this study was to refine and cross-validate the KMI model (Figure 1), it is important to relate the KMI in a nomological network with OP. When organizations are engaged in the KMPs under the influence of specific CSFs, it is expected that the right knowledge will get to the right person at the right time for the execution of the right tasks for the attainment of OP (Hibbard, 1997). Therefore, we hypothesized that:

H1: The degree to which an organization is engaged in KM is positively correlated with the performance of the organization. In other words, the KMI is positively correlated with OP.

The KMI-OP LV research model is presented in Figure 2. In this model, OP was considered in terms of non-financial measures. Non-financial measures have been recognized as good substitutes for objective financial performance (Dess and Robinson, 1984). As KM attempts to enhance the skill level of employees (Bose, 2004), one important non-financial dimension is human resources capabilities (Stivers, Covin, Hall, et al., 1998; Hackett, 2000).
Given this specific dimension of OP, the following sub-hypothesis is investigated:

H1a: The more an organization is engaged in KM, the greater the level of development of its human resources capabilities. In other words, the KMI is positively correlated with OP viewed in terms of organizational human resources capabilities.

Considering the four dimensions of the KMI, the resulting KMI-OP LV nomological network model of this study is presented in Figure 3, where Q1, Q2, …, Q32 are the 32 items in the current KMI instrument, and P1, P2, …, P16 are OP-related items discussed later under methodology.
METHODOLOGY

KMI Instrument

In the KMI instrument proposed by Asoh et al. (2002), 32 items are used, covering the content areas of KMPs and CSFs identified in Belardo's Matrix (Table 1). Following two pilot studies (Asoh et al., 2004; Crnkovic et al., 2004), the instrument was modified to ensure clarity and avoid jargon. In the final version of the instrument, respondents were asked to rate (1) the degree to which each item reflected the organization's engagement in specific KMPs as influenced by specific CSFs, and (2) the importance of the concept expressed by each item to the organization. The response on the importance of the concept was used to compute a mean "importance score" required in the item purification process as an "external criterion" (DeVellis, 1991; Spector, 1992).
Figure 3. The KMI – OP nomological network model
The rationale for the use of this external criterion is that, since the "importance score" reflects the value an organization places on its KMPs and CSFs, only item measures that correlate positively and highly with the "importance score" are meaningful and should be retained for further analysis (Sethi & King, 1991). All items were measured on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). The 32 KMI items are presented in Appendix A. The OP instrument proposed by Asoh et al. (2004) consisted of 16 items (P1, P2, P3 … P16), reflecting three non-financial OP areas: goal attainment (P1, P2, P3, P4, P9); human resources capabilities (P5, P6, P7, P8, P11, P12); and customer service (P10, P13, P14, P15, P16). Responses to the OP items were made on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). The 16 items are presented in Appendix B.
Data Collection, Sample, and Descriptive Statistics

Data Collection: Data were collected using the survey method from two purposive samples of participants in an international Executive MBA program at a Northeastern U.S. university. The first survey was conducted amongst participants based in the US, the second amongst participants based in Europe (Germany and Switzerland). Each of the surveys was administered by one of the authors in person. Those surveyed were briefed beforehand and participation was voluntary. Since the unit of analysis was the organization, only one participant from each organization was chosen.
In lieu of financial incentives, as suggested by Dillman (2000), participants were promised a summary of the results.

Sample: Of the 52 questionnaires administered to U.S. participants, 38 were useable (response rate = 73.1%). Of the 58 questionnaires administered to European participants, 44 were useable (response rate = 75.9%). Thus, of the 110 questionnaires administered, 82 yielded useable results (response rate = 74.5%). This high response rate can be attributed to the method of administering the survey on the spot; this method also alleviated the need to test for response bias.

Descriptive Statistics: Participants in the study came from both the private and the public sector. 10 of the 82 (12.2%) were government employees: 6 in the US sample and 4 in the European sample. The remaining 72 (87.8%) were privately employed: 32 in the US sample (44.4%) and 40 in the European sample (55.6%). Although many government organizations have embraced KM, we believe that their approach to KM is substantially different from that of private companies. Given our objective of refining and cross-validating the KMI model, we thus excluded the ten government responses from further analysis. Most of the organizations were well established: 54 (75%) were at least 15 years old, only 3 (4%) were less than 2 years old, and 19 (21%) were between 2 and 15 years old.
15 companies (21%) employed fewer than 50 employees; 27 (38%) employed more than 2,000 employees, while 30 (41%) employed between 50 and 2,000 employees. Annual revenues mirrored organizational size: 16 companies (22%) had revenues of less than US $50 million; 25 (35%) had revenues above $2 billion, while 31 (43%) had revenues between $50 million and $2 billion. Of the 72 private sector respondents, 11 (15%) were female and 61 (85%) were male; 43 (60%) were mid-level managers and directors; 13 (18%) were upper level executives, i.e., presidents, vice presidents, and chief information/knowledge officers; and 16 (22%) had "other" titles. 37 (52%) of the respondents had held their current position for at least two years, 35 (48%) for two years or less. 38 respondents (53%) held BAs/BScs, 28 (39%) held MAs/MBAs/MScs, 5 (7%) held DBAs/PhDs, and 1 (1%) held another degree.
KMI Instrument Purification

Equality of Sample Test: A preliminary test was conducted to determine whether responses from the two samples (U.S. and European) were statistically different from each other. Levene's test for equality of variance indicated that there were no significant differences for any item in the two samples. The items were then subjected to a screening or purification process to eliminate "garbage items".
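A minimal sketch of this equality-of-variance screening is shown below; the scipy.stats.levene call is standard, but the data-frame names us_df and eu_df and the 0.05 criterion are illustrative assumptions.

```python
# Per-item Levene test comparing the variance of US and European responses.
from scipy.stats import levene

for item in us_df.columns:                       # us_df / eu_df: assumed survey data frames
    stat, p = levene(us_df[item].dropna(), eu_df[item].dropna())
    if p < 0.05:
        print(f"{item}: variances differ significantly (p = {p:.3f})")
```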
measure are considered poor items and are dropped in the purification process as recommended (DeVellis, 1991; Spector, 1992). Recently, substantive and empirical criteria have been employed for item purification (Matsuno, Mentzer, & Rentz, 2000). The substantive criterion entails examining the breadth of the theoretical content coverage of the items, while the empirical criterion entails examining descriptive statistics, fit statistics, and correlational and reliability statistics. Items with poor content coverage, low fit indices, or low correlations are deleted in the purification process. Within the sphere of SEM, a split-sample approach to item purification has been used. In this approach, the data were split into two equal datasets (validation and holdback samples) (Chan, Huff, & Copeland, 1998). Using the validation sample, poorly performing items in the SEM model were dropped in a purification process in which an initial model was revised to obtain a refined model. Using the holdback sample, the resulting refined model was then tested for parameter stability without any further modifications or dropping of items. More recently, an SEM purification approach using multiple survey samples has been used (Netemeyer, Krishnan, Pullig et al., 2004). Here, items that loaded low (<0.50) or very high (>0.95) were deleted in the purification process from one sample to the next. We drew on the above research on item purification and employed a mixed approach to purify the KMI item measures at two levels. At the first level, we carried out item correlational analysis with the "importance score." The "importance score" was the external criterion on the KMI instrument and reflects the value an organization places on specific KMPs and CSFs. Only item measures that correlated positively and highly with the "importance score" were retained for further analysis. As a result of the correlation analysis, the item pool was reduced from 32 to 19 items after the first level of the KMI item purification. The initial and pre-final distributions of the KMI items within Belardo's Matrix are shown in Table 2.
Table 2. Distribution of KMI items (before, during, and after two quantitative screenings).
Columns are the Knowledge Management Processes (KMPs); rows are the Critical Success Factors (CSFs). Items marked with an asterisk (*) are the 12 items retained after the second screening.

CSFs \ KMPs    Identification   Elicitation   Dissemination   Utilization
Technology     Q1*  Q2*         Q9*  Q10      Q17* Q18*       Q25  Q26*
Leadership     Q3   Q4          Q11* Q12      Q19* Q20        Q27* Q28
Culture        Q5   Q6          Q13  Q14      Q21  Q22        Q29  Q30*
Measurement    Q7*  Q8          Q15  Q16*     Q23  Q24        Q31  Q32
After the first level of the purification process, 19 items were retained. As presented in Table 2, none of the KMPs retained all eight items across all the CSFs. Although only knowledge elicitation retained seven items across all four CSFs, it is important to note that the other three KMPs retained items distributed across three CSFs. If all the items along one column (KMP) or across one row (CSF) had been eliminated through the screening, the scope of the KMI construct would have been changed, and redefining the construct would have been necessary. At the second level of purification, we adopted the SEM item purification approach discussed above (Chan et al., 1998). We split the reduced dataset of 19 items into two equal samples (calibration and validation), as discussed below under KMI instrument cross-validation. Further purification of the 19 items using the calibration sample in an SEM model resulted in the reduction of the item pool to 12 items (marked with an asterisk in Table 2). The 12 items constituting the final refined KMI instrument used in the validation and cross-validation of the SEM measurement and structural models are marked with an asterisk in Appendix A and are further discussed in section five. It is worth noting that, even though seven items were dropped while the KMI framework was maintained, the stability and validity of the KMI model with the reduced number of items can only be ascertained through cross-validation in a nomological network.
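To make the first-level screening concrete, the sketch below shows one way such an item-criterion correlational purification could be implemented. It is an illustrative reconstruction, not the authors' code: the DataFrame, the column names (Q1-Q32 ratings and an overall importance_score criterion), and the retention thresholds are all assumptions.

```python
# Illustrative sketch of first-level item purification: correlate each item with an
# external criterion and keep only items that correlate positively and significantly.
# All names and thresholds are hypothetical; this is not the study's actual procedure.
import pandas as pd
from scipy import stats

def purify_items(df, item_cols, criterion_col, min_r=0.30, alpha=0.05):
    """Return items whose correlation with the criterion is positive, sizable, and significant."""
    retained = []
    for col in item_cols:
        r, p = stats.pearsonr(df[col], df[criterion_col])
        if r >= min_r and p < alpha:
            retained.append(col)
    return retained

# Hypothetical usage with 32 KMI items and an overall importance score:
# items = [f"Q{i}" for i in range(1, 33)]
# kept = purify_items(survey_df, items, "importance_score")
```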
KMI Instrument Cross-Validation

Calibration and Validation Samples: After the first level of item screening, the combined dataset of 19 items was systematically split into two sub-samples (calibration and validation, or holdout) (Grant & Higgins, 1991; Chan et al., 1998). First, responses with even numbers from both the U.S. and European samples were selected to constitute the calibration sample, while odd-numbered responses were retained to constitute the validation sample. Second, the calibration and validation samples were tested for statistical differences on the questionnaire items. The calibration and validation datasets that resulted from this split each had 36 cases. Levene's test for equality of variance revealed no significant differences between the two datasets for all 19 KMI items as well as for seven demographic variables (age, size, and revenue of company, employee title, years in service with company, years in current position, and gender of respondent). A significant difference (2-tail, p=0.005) was noted for the educational level of respondents. This may be explained, in part, by the uneven distribution of respondents across the four educational levels: BA/BS, MBA/MA/MS, DBA/PhD, or other (see demographics). We consider this difference to be an isolated incident when all eight demographic variables are examined. We maintain, therefore, that the calibration and validation samples were not statistically different from each other and could be retained for further analysis. The calibration sample was subsequently used to calibrate, purify,
and test the KMI measurement and structural models, while the validation sample was used to cross-validate these models.

Cross-Validation Approach: After generating the measurement and structural models based on post hoc modification using the calibration sample, we then cross-validated these models using the validation dataset (Grant & Higgins, 1991; Barclay, Higgins, & Thompson, 1995; Chan et al., 1998). Cross-validation is established when the best model obtained using the calibration sample replicates over the validation sample (Byrne, 2001). Since the PLS SEM analytic approach does not provide model fit indices, replication is judged by comparing the level and significance of: (1) item loadings and weights in the measurement models; (2) path coefficients; (3) variances explained in the structural models within a nomological network; and (4) overall construct validity for the calibration and validation samples.

Cross-Validation Nomological Network: Another theoretical question of interest in cross-validating the KMI is the relationship between KM and OP. We anticipate that when organizations are engaged in KMPs under the influence of specific CSFs, the right knowledge will get to the right person at the right time for the attainment of OP objectives (Hibbard, 1997). In the current context, relating the KMI to OP would constitute a useful nomological network within which the validity of the refined KMI model can be tested and cross-validated. In line with previous studies (Asoh et al., 2004; Crnkovic et al., 2004), we hypothesize that the KMI is positively correlated with OP, i.e., the degree to which organizations are committed to, and engaged in, KM will be positively and significantly associated with OP. The OP instrument (Appendix B) had 16 items, the mean of which was used in previous research. Since these items represented three OP-related areas (goal attainment, human resources capabilities, and customer services), the logic of the SEM modeling technique requires conceptualizing OP
as a three-dimensional second-order construct. However, the focus of this study was not on the OP construct. In order to keep the nomological network and corresponding analysis simple, as well as to minimize the number of parameters to be estimated in the SEM models, we decided to use the six items from one dimension of OP (human resources capabilities, HR-Cap). Levene's test for equality of variance indicated that there were no significant differences for any of the six HR-Cap items (P5, P6, P7, P8, P11, and P12) in the two samples. After SEM analysis using the calibration sample, three items (P5, P6, P7) were retained for use in the cross-validation based on item loading considerations (marked with an asterisk in Appendix B).
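The splitting and equality-of-variance checks described above are straightforward to express in code. The following sketch assumes a pandas DataFrame with one row per respondent; it is illustrative only, and the column names are hypothetical.

```python
# Illustrative sketch: split the combined sample into calibration and validation
# sub-samples by alternating rows, then check equality of variances with Levene's test.
import pandas as pd
from scipy import stats

def split_and_check(df, check_cols, alpha=0.05):
    calibration = df.iloc[::2]    # every other response (e.g., even-numbered)
    validation = df.iloc[1::2]    # the remaining (odd-numbered) responses
    flagged = {}
    for col in check_cols:
        stat, p = stats.levene(calibration[col].dropna(), validation[col].dropna())
        if p < alpha:             # a small p-value would indicate unequal variances
            flagged[col] = p
    return calibration, validation, flagged
```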
Structural Equation Modeling (SEM) Analytical Approach

The SEM analytical approach, unlike traditional approaches (e.g., regression analysis), allows us to examine multiple variables simultaneously (Gefen, Straub, & Boudreau, 2000). SEM makes it possible not only to confront a priori theory and hypotheses with empirical data but also to model unobservable latent variables such as the KMI (Fornell, 1982). In this study, we used the Partial Least Squares (PLS) SEM approach. We preferred PLS to other SEM approaches, e.g., LInear Structural RELationships (LISREL), for a number of reasons. First, PLS is more suitable for exploratory/confirmatory research, is predictive, maximizes the variance explained, is more robust to multivariate data distributions, and does not require a large sample size (Chin & Newsted, 1999; Gefen et al., 2000). In fact, PLS has been recommended as the best tool for early stage research where the theoretical background is still developing and not yet very strong (Falk & Miller, 1992), as is the case here. Second, PLS is flexible with regard to sample size considerations: unlike LISREL, acceptable results
can be obtained with small sample sizes (Fornell & Bookstein, 1982; Fornell, 1982). Sample size considerations were important because of the need to partition the dataset to have a hold-out sample for cross-validation purposes. The PLS sample size criterion is based on the portion of the model with the largest number of predictors. While a minimum of 30 to 100 cases often meets power analysis requirements (Chin & Newsted, 1999), the general sample size requirement is considered to be ten times whichever is greater, A or B, where A is the greatest number of formative indicators on any single construct in the model and B is the greatest number of exogenous latent variables impacting any single construct in the model (Chin, 1998). A weaker rule of thumb holds that sample sizes of five, rather than ten, times A or B are acceptable for PLS analysis (Gopal, Bostrom, & Chin, 1992). With the KMI modeled as a four-dimensional formative construct, a sample size of 40 would be required for the more stringent rule of ten, while a sample size of 20 would be sufficient for the weak rule of five. Although the sample size of 36 for both the calibration and validation samples is slightly less than the required size when the stringent rule of ten is applied, it is more than sufficient when the weak rule of five is applied. In previous research, Lohmoller used a sample size of 10 to estimate a model with 27 variables, and another model with 96 indicators and 26 constructs was estimated with 100 cases (cited in Barclay et al., 1995). We believe, therefore, that the estimated model parameters would not be unduly affected by the sample size, given that we have only 15 indicators and 20 parameters to estimate (see the KMI structural model discussed later).
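As a quick illustration of these heuristics (and not a substitute for a proper power analysis), the minimum sample size under the two rules of thumb can be computed directly:

```python
# Illustrative sketch of the PLS sample-size rules of thumb described in the text.
def pls_min_sample(max_formative_indicators, max_structural_paths, multiplier=10):
    """multiplier * max(A, B): use multiplier=10 for the stringent rule, 5 for the weak rule."""
    return multiplier * max(max_formative_indicators, max_structural_paths)

# With the KMI modeled as a four-dimensional formative construct (A = B = 4):
print(pls_min_sample(4, 4, 10))  # 40 (stringent rule of ten)
print(pls_min_sample(4, 4, 5))   # 20 (weak rule of five)
```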
Data Analysis Using the PLS Analytical Tool

We conducted SEM data analysis using PLS Graph version 03.00 Build 1126 (Chin, 2005). Based on our conceptualization of the KMI, we adopted
the PLS molar approach (Chin & Gopal, 1995; Iivari, 2005) for the PLS analysis, modeling the KMI as a four-dimensional type II LV construct in a "reflective first-order, formative second-order" configuration (Jarvis et al., 2003). Knowledge identification (IDENT), elicitation (ELICIT), dissemination (DISSEM), and utilization (UTILIZ) were considered as first-order reflective LVs, which acted as formative indicators of the second-order KMI construct since they determine the level of the KMI of the organization (Figure 1). Subsequently, we used the two-step approach to SEM (Anderson & Gerbing, 1988) to complete the data analysis using the calibration and validation samples in separate analyses. First, the calibration sample was used to investigate the measurement model, which depicts the relationship between the items and their respective constructs (IDENT, ELICIT, DISSEM, and UTILIZ for the KMI, and human resources capabilities (HR-Cap)). Second, the calibration sample was used to investigate the KMI structural model in a nomological network with OP. The foregoing analyses were repeated using the validation sample. In both the calibration and validation analyses, we used a bootstrap sample of 200 to estimate the parameters. For the calibration and validation models, we assessed the measurement and structural models following the statistical measurement modeling approach (Segars, 1997) and the guidelines for validating research instruments (Straub et al., 2004). Specifically, we assessed (1) the reliability of measurements within the KMI construct dimensions and the HR-Cap construct and (2) the validity of measurements between the KMI construct dimensions and the HR-Cap construct. In terms of reliability, we assessed (1) the internal consistency reliability and (2) the unidimensionality of items; in terms of validity, we assessed (1) convergent validity, (2) discriminant validity, and (3) nomological validity. The results of these assessments are presented and discussed in the next section.
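The bootstrap step can be pictured as follows. This sketch shows only the resampling logic; the PLS estimation itself is represented by a placeholder function (estimate_paths) and is not implemented here, so the snippet is an assumption-laden illustration rather than a description of PLS Graph.

```python
# Illustrative bootstrap of path coefficients: resample respondents with replacement
# 200 times, re-estimate the model on each resample, and derive bootstrap t-statistics.
import numpy as np

def bootstrap_paths(data, estimate_paths, n_boot=200, seed=0):
    rng = np.random.default_rng(seed)
    point = np.asarray(estimate_paths(data))          # estimates on the full sample
    boots = np.empty((n_boot, point.shape[0]))
    for b in range(n_boot):
        idx = rng.integers(0, data.shape[0], size=data.shape[0])
        boots[b] = estimate_paths(data[idx])          # estimates on a resampled dataset
    se = boots.std(axis=0, ddof=1)                    # bootstrap standard errors
    return point, se, point / se                      # point estimates, SEs, t-statistics
```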
Figure 4. Measurement models for calibration and validation samples (all loadings significant at the p=0.1 level)
Results and Discussion

Measurement Models

The essence of assessing the measurement model is to ensure that the constructs are accurately measured. The measurement models for the calibration and validation samples are shown in Figure 4. All loadings in both samples were significant at the p=0.1 level. We consider the SEM measurement model as a form of confirmatory analysis (Loehlin, 2004) through which the reliability (i.e., internal consistency reliability and unidimensionality) of construct items can be assessed before assessing the constructs themselves. Unidimensionality of an item means that the item reflects only one underlying construct (Segars, 1997). Without acceptable unidimensionality, it is not meaningful to talk about statistics such as composite scores (Gerbing & Anderson, 1988) used in previous KMI studies, or other statistics that involve manipulation of items to make judgments on constructs. In effect, unidimensionality is a necessary condition that should precede both reliability and validity considerations, in that order (Segars, 1997; Ping Jr., 2004). The verification criterion for unidimensionality is that a set of items measuring a given construct or sub-construct should exhibit a "parallel correlational pattern" with other sets of items measuring other constructs or sub-constructs (Segars, 1997, p. 109). To assess the unidimensionality of the measurement models in the calibration and validation samples, we computed a table of item loadings and cross-loadings on the various constructs for both samples (Figure 5). According to the unidimensionality criterion, an item should significantly load only on the latent construct to which it is assigned. The parallel correlational pattern is evident in Figure 5 for the items assigned to each construct, indicating that all items are unidimensional except one item, Q26, which loaded high on two constructs (DISSEM and UTILIZ) in the calibration sample. This item, however, loaded high only on the UTILIZ construct in the validation sample, as expected. When we examined the item (Q26), which read "my organization employs technology that makes the utilization of knowledge resources transparent to all," we noted that the item was meant to capture
Figure 5. Item cross-loadings on constructs demonstrating unidimensionality
the impact of the technology CSF on the knowledge utilization KMP. It is possible that the words “transparent to all” might have been interpreted in the sense of dissemination by most respondents within the calibration sample who also associated the item with knowledge utilization as expected. Furthermore, the relatively very low loading of the same item on the DISSEM construct in the validation sample and its persistent high loading on the UTILIZ construct in the same sample seem to suggest some possible hidden variation between the two samples rather than specific problems with the item. We decided to maintain the item. Having established unidimensionality of the items, we proceeded to look at items loading and reliability. The loadings and weights of the items in the measurement models are indicated in Figure 4, with numbers in brackets representing weights. All item loadings in both calibration and validation samples were significantly related to their respective constructs at or above the p=0.01 level. The loadings were also within the limit or
exceeded the threshold value of 0.70, indicating good reliability, with the exception of item Q19 in the calibration sample, which loaded at 0.52. Although some researchers (Yang, Cai, Zhou et al., 2005) have retained items with loadings as low as 0.47 in cross-validation studies, we considered dropping item Q19 since it was far below the threshold value of 0.70. However, this item loaded as expected (0.81) in the validation sample. Barclay et al. (1995) pointed out that it is not uncommon to find items that load below the 0.70 threshold when newly developed instruments are used in SEM modeling. We attributed the fluctuation to possible slight variation in the interpretation of the item by the US and European samples rather than to problems with the item, and therefore maintained it. In addition, maintaining the item also ensured balanced item blocks in the models. The means, standard deviations, internal consistencies, construct inter-correlations, convergent validity, and discriminant validity are presented in Table 3 and discussed in the following paragraphs.
Table 3. Construct means, standard deviations, internal consistencies, validity, and inter-correlations.
Inter-correlations of the latent constructs are shown at right, with the square root of the average variance extracted (SQR(AVE)) on the diagonal. In each cell, the upper number refers to the calibration sample and the lower number to the validation sample.

Construct   No. of  Mean   Std.   Cronbach   Composite     IDENT   ELICIT   DISSEM   UTILIZ   HR-CAP
            items          Dev.   Alpha      Reliability
IDENT       3       8.04   2.39   0.61       0.80          0.75
            3       7.40   2.75   0.73       0.85          0.81
ELICIT      3       8.34   2.81   0.74       0.85          0.63    0.81
            3       8.40   2.32   0.68       0.83          0.49    0.79
DISSEM      3       9.50   2.73   0.64       0.81          0.46    0.46     0.77
            3       9.86   3.20   0.83       0.90          0.61    0.28     0.86
UTILIZ      3       7.77   2.58   0.54       0.77          0.44    0.58     0.44     0.72
            3       7.75   2.44   0.57       0.78          0.60    0.56     0.33     0.74
HR-CAP      3       9.44   2.91   0.68       0.83          0.30    0.30     0.40     0.39     0.78
            3       8.58   2.73   0.76       0.86          0.35    0.31     0.20     0.41     0.82
Since loadings are correlations, the reliability of an item can also be assessed as the square of the item loading. An item is considered reliable if its squared loading is greater than or equal to 0.50, meaning 50 percent of the variance in the observed variable is associated with the construct for which the variable is a measure. The squared loadings of all items were within reasonable limits or exceeded the 0.50 threshold in the calibration and validation samples (with the exception of item Q19 whose reliability fluctuated between the two samples) again providing evidence of acceptable item reliability. Internal consistency is often examined in support of the reliability of items through the Fornell and Larcker Measure (Fornell & Larcker, 1981). This statistic which is computed as the sum of the loadings, all squared, divided by the sum of the loadings, all squared, plus the sum of the error terms, is directly obtainable as the composite reliability from the PLS Graph software. For adequate reliability based on internal consistency, the composite reliability should be greater than or equal to 0.70 (Fornell & Larcker, 1981). All composite reliabilities for item constructs in the calibration and validation samples were well above the 0.70 threshold. The composite reliabilities for
the calibration sample varied between 0.77 and 0.85, while those for the validation sample varied between 0.78 and 0.90. Cronbach's Alpha, a traditional measure of reliability in non-SEM research, was computed for comparison with the composite reliability. An Alpha value of 0.70 is considered moderate and acceptable for early stage research (Nunnally, 1978). The Alpha values for the constructs in the calibration sample varied from 0.54 to 0.74, while those for the constructs in the validation sample varied from 0.57 to 0.83. While each construct had a moderate Alpha value above the benchmark (0.70) in either the calibration or the validation sample, this was not the case for the UTILIZ construct, which had low values (0.54 and 0.57) in both samples. We attribute the low Alpha values for UTILIZ to a possible offending contribution of item Q26, which loaded high on DISSEM as previously discussed. A comparison of the Alpha and composite reliability measures might suggest that the KMI is not yet in its best form. Notwithstanding, it is worth noting that the Alpha measure has been noted as not being the best measure of reliability compared to the
composite (Fornell and Larcker) measure. The argument is that the Fornell and Larcker measure is superior because it uses item loadings estimated in a causal model, in which the item loadings are not assumed to have equal weights, as is the case with the Alpha measure (Fornell & Larcker, 1981). In addition, the Alpha measure is sensitive to the number of items in a scale, which is not the case for the Fornell and Larcker measure, which is considered more universal (Barclay et al., 1995). Equally, Alpha is sensitive to unidimensionality: deviations from unidimensionality tend to increase the estimate of Alpha (Shevlin, Miles, Davies et al., 2000). As already observed, the KMI constructs exhibit high unidimensionality (Figure 5), which may explain the low values of Alpha. Convergent validity refers to the extent to which a set of items thought to reflect a given construct converge, or show high correlations with one another (Straub et al., 2004), and act as if they were measuring the underlying construct because they share a common variance. The statistic is expressed as the ratio of the amount of variance of the set of items captured by the underlying construct to the total variance of the construct, including variance due to measurement errors, and is measured by the average variance extracted (AVE) (Fornell & Larcker, 1981). An AVE of less than 0.5 is judged unsatisfactory since more variance in the construct is then attributable to errors. The AVEs are directly obtained from the PLS-Graph software. Both the calibration and validation samples exhibit adequate convergent validity for the constructs since all AVEs (the squares of the values in the diagonal of Table 3) are above the 0.50 threshold: from 0.53 to 0.65 (calibration sample) and from 0.54 to 0.74 (validation sample). Discriminant validity indicates the extent to which one construct is different from other constructs (Grant & Higgins, 1991; Barclay et al., 1995; Hulland, 1999). The criterion for discriminant validity is that the average variance extracted
(AVE), i.e., the average variance shared between a construct and its measures, should be greater than the variance shared between the construct and other constructs in the model (Fornell & Larcker, 1981; Barclay et al., 1995; Chin, 1998; Hulland, 1999; Iivari, 2005). In other words, the average variance shared between a construct and its measures must be greater than the squared correlation between two constructs. To judge discriminant validity, the square root of the average variance extracted (SQR(AVE)) is placed on the diagonal of the correlation matrix for the constructs under consideration. For adequate discriminant validity, the elements in the diagonal position (SQR(AVE)) should be greater than the off-diagonal elements in the corresponding row and column. The diagonal elements exceeded all the off-diagonal elements in both the calibration and validation samples. In Table 3, the diagonal elements for the calibration sample are the upper values in the diagonal cells.
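For readers who want to recompute these diagnostics from reported loadings, the following sketch shows the standard formulas for composite reliability, AVE, and the Fornell-Larcker discriminant-validity comparison. It is a generic illustration using standardized loadings, not output taken from PLS Graph, and the example loadings are hypothetical.

```python
# Illustrative reliability and validity diagnostics from standardized item loadings.
import numpy as np

def composite_reliability(loadings):
    lam = np.asarray(loadings, dtype=float)
    error_var = 1.0 - lam ** 2                      # error variance of standardized items
    return lam.sum() ** 2 / (lam.sum() ** 2 + error_var.sum())

def ave(loadings):
    lam = np.asarray(loadings, dtype=float)
    return np.mean(lam ** 2)                        # average variance extracted

def discriminant_ok(ave_a, ave_b, corr_ab):
    # Fornell-Larcker criterion: sqrt(AVE) of each construct should exceed their correlation
    return np.sqrt(ave_a) > abs(corr_ab) and np.sqrt(ave_b) > abs(corr_ab)

# Hypothetical three-item construct:
print(round(composite_reliability([0.78, 0.74, 0.70]), 2))  # ~0.78
print(round(ave([0.78, 0.74, 0.70]), 2))                    # ~0.55
```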
Structural Models The structural models for the calibration and validation samples are presented in Figure 6. We assessed the structural models by examining the significance of: (1) path coefficients among the constructs; and, (2) variance explained (Falk & Miller, 1992). To obtain the path coefficients, we conducted a bootstrap analysis with an initial sample size of 200 as recommended by Chin (2005). The PLS software provides t-statistics for the path coefficients. For both the calibration and validation samples, all path coefficients were significant at the p=0.1 level or higher. To ensure more confidence in the stability of the path coefficients, we also assessed the structural model by considering the significance of variance explained based on the F-statistics (Falk & Miller, 1992) with the F-statistics computed as:
Figure 6. Structural models for calibration and validation samples (all paths significant at the p=0.1 level)
Table 5. Construct contributions to variance explained and their significance (N = 36).
In each cell, the upper number refers to the calibration sample and the lower number to the validation sample. IV denotes the exogenous and DV the endogenous construct; m is the number of predictors.

Path (IV => DV)   m   Loading   Correlation   [Partial] R²   F(m, N-m-1)   Critical F(m, N-m-1)   R² p-level (2-tail sig.)
KMI => HR-CAP     1   0.45      0.45          0.20           8.61          7.44                    0.01
                  1   0.53      0.53          0.28           12.90         9.01                    0.005
IDENT => KMI      4   0.28      0.80          0.23           2.28          2.12                    0.1
                  4   0.37      0.89          0.33           3.38          3.19                    0.025
ELICIT => KMI     4   0.35      0.89          0.31           3.42          3.19                    0.025
                  4   0.28      0.74          0.21           2.10          2.12                    0.1
DISSEM => KMI     4   0.32      0.79          0.25           2.56          2.12                    0.1
                  4   0.34      0.75          0.25           2.66          2.12                    0.1
UTILIZ => KMI     4   0.27      0.82          0.22           2.17          2.12                    0.1
                  4   0.27      0.77          0.20           1.99          2.12                    ns
F = (R² / m) / [(1 − R²) / (N − m − 1)],
where R² is the contribution to variance explained, N is the sample size used in the model, and m is the number of predictors of the construct; the resulting F statistic has (m, N − m − 1) degrees of freedom. The R² value (0.20) for the KMI => HR-Cap path for the calibration sample was significant at the p=0.01 level, while the R² value (0.28) for the same path in the validation sample was significant at the p=0.005 level.
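A compact way to reproduce this test is sketched below (assuming scipy is available; results may differ slightly from Table 5 because the tabulated R² values are rounded).

```python
# Illustrative computation of the variance-explained F statistic and its critical value,
# using (m, N - m - 1) degrees of freedom as in Table 5.
from scipy import stats

def variance_explained_f(r2, n, m, alpha=0.10):
    f_value = (r2 / m) / ((1.0 - r2) / (n - m - 1))
    f_crit = stats.f.ppf(1.0 - alpha, m, n - m - 1)
    return f_value, f_crit

# e.g., the KMI => HR-Cap path in the calibration sample (R² = 0.20, N = 36, m = 1):
print(variance_explained_f(0.20, 36, 1, alpha=0.01))  # roughly (8.5, 7.44)
```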
The results of these assessments are presented in Table 5. We also investigated the partial contributions of each of the four dimensions of the KMI model to the value of the KMI and the significance of such contributions. ELICIT made the greatest contribution (31%) to the KMI in the calibration sample while IDENT made the greatest contribution (33%) in the validation sample. UTILIZ made the smallest contribution in both samples: 22%
and 20%, respectively, in calibration and validation. All partial contributions were significant in both samples at the p=0.1 level or above, except UTILIZ whose partial contribution was narrowly non-significant at the p=0.1 level in the validation sample (see Table 5). While the non-significance of the contribution of UTILIZ in the validation sample raises some questions, Falk & Miller (1992) maintain that, between values of significance and values of variance explained, preference should be given to variance explained. According to these authors, variances explained should be greater than or equal to 0.10; interpreting variances of less than 0.10, even if statistically significant, offers little or no benefit. In fact, Falk and Miller strongly argue that a predictor should only be maintained in a model if the contribution made by that predictor is at least 1.5% of the total variance of the predicted variable. Given the conceptualization of the KMI as a formative construct, the four dimensions are predictors of the KMI. Evidently, the 20% contribution of UTILIZ in the validation model is more than ten times the minimum required contribution (1.5%) advocated by Falk & Miller (1992). These considerations alleviate any worries about the nature and stability of the four dimensions of the KMI in both the calibration and validation samples.
Conclusion

Summary of Study and Results

The main purpose of this study was to refine and cross-validate the KMI model proposed by Asoh et al. (2002) so that a robust model can be made available to both researchers and practitioners. In refining and cross-validating the KMI model, we used empirical data to verify two hypotheses: first, that the KMI is a multidimensional construct, and second, that the KMI is significantly and positively correlated with OP.
We rationalized the multi-dimensional perspective of the KMI on the grounds that the KMPs (identification, elicitation, dissemination, and utilization) are distinct from each other and can be measured using different items even if they are impacted by the same or different CSFs (technology, leadership, culture, and measurement). In addition, we also maintained that KM is a multi-faceted organizational phenomenon that cannot be effectively studied using a reductionism approach based on the mean of responses to questionnaire items. For the refinement and validation we employed quantitative criteria at two levels (correlational analysis and SEM analysis) using empirical data collected from U.S. and European samples to refine and reduce the initial pool of 32 items in the KMI instrument to 12 items. We further investigated and compared the psychometric properties of the 12 item refined version of the KMI instrument with a calibration sample and cross-validated the model using a validation sample. Results of our analysis confirmed the multidimensionality of the KMI. Each of the four dimensions significantly contributed to the KMI. We also found that the psychometric properties of both the calibration and validation samples were within acceptable limits as prescribed in the SEM literature. The validation sample faithfully replicated the properties of the calibration sample, thereby confirming cross-validation of the KMI model. Furthermore, in both the calibration and validation samples, the KMI was found to be positively and significantly related to OP by virtue of the positive and significant path coefficient between the KMI and HR-cap as well as the significant variance explained (Table 5). The findings of this study therefore confirm similar findings from previous research by Asoh et al., (2004) and Crnkovic et al., (2004).
Research Contributions and Implications

KM is an emerging field, and developing and using constructs is an important step in the development and advancement of theory in the field. In order not to re-invent the wheel, researchers are urged to use existing constructs in theory development. Such an approach makes comparative evaluation of research results possible. However, it is important to know the properties of scales or indexes developed to measure a construct before deciding to use the construct (Matsuno et al., 2000), since results obtained from using inadequate constructs can be misleading and detrimental to the development of theory and the advancement of knowledge (MacKenzie, 2003; MacKenzie, Podsakoff, & Jarvis, 2005). This study successfully refined and cross-validated the KMI model using the SEM approach via PLS. Post hoc model modifications and adjustments are common practices in SEM analysis. As Loehlin (2004) points out, once a model has been modified or adjusted on the basis of its fit or lack of fit with a given dataset, "its statistical status is precarious until it can be tested on a new body of data that did not contribute to the adjustment" (p. 234). The study cross-validates the KMI model. Cross-validation of research models is important because it not only alleviates any concerns regarding model specification (Rigdon, 1998; Loehlin, 2004) but, more importantly, demonstrates that the KMI model can "generate consistent results, and will thus be of practical value in making predictions among members of the reference population upon which the model is based" (Sheskin, 2004, p. 1002). A refined and cross-validated KMI model makes for easy and confident replication of this study in future research. Related assessments of KM have focused on the development of scales (e.g., Darroch, 2003; Lee et al., 2005). This study differentiates itself from the others by casting and investigating the KMI as a formative latent variable, thereby applying
to the KM field research on formative measures and index creation gleaned in other fields (Fornell, Lorange, & Roos, 1990; Diamantopoulos & Winklhofer, 2001; Arnett, Laverie, & Meiers, 2003).
Management Implications

This study revealed the positive and significant relationship between the KMI and OP. Specifically, the study verified previous research on the predictive validity of the KMI in the nomological network with OP, with OP considered in non-financial terms of HR capabilities. The study contributes to managers' understanding of the possibility of predicting OP based on organizational KM efforts. Given the definition of the KMI and the positive correlation between the KMI and OP, managers should note that greater engagement in KM would lead to greater accrued or expected KM benefits and consequently higher accrued or expected OP. In addition, the study revealed that although all four KMPs contribute significantly and positively to the value of the KMI, knowledge identification, elicitation, and dissemination seem to contribute more (respective averages of 28%, 26%, and 25% for the calibration and validation samples) compared to knowledge utilization (average 21%). While companies stand to benefit more when knowledge is used, the lower contribution of knowledge utilization despite high knowledge identification, elicitation, and dissemination may suggest that organizations have to pay more attention to knowledge utilization. When we examined the 12 items retained for the refined KMI within Belardo's Matrix, we found that three CSFs (technology, leadership, and measurement) impacted two or more KMPs, while one CSF (culture) impacted only one KMP (utilization). Although one interpretation may be that culture is not as important as the other CSFs when it comes to knowledge identification, elicitation, and dissemination, we believe a contrary interpretation is in order: managers should rather focus greater
attention on the culture of knowledge identification, elicitation, and dissemination. This ensures that the contribution of culture is felt when it comes to anticipating KM benefits since the KMI is a formative, rather than a reflective construct. The KMI model with the refined 12 item instrument should appeal to managers. Managers will be able to easily use the new instrument to assess the degree of their organizational commitment to, and engagement in KM. Such preliminary assessments would further help managers understand and anticipate potential KM benefits. Those organizations that are able to identify the knowledge they need, acquire it, and disseminate it so that it can be utilized in business operations, will increasingly be able to appreciate and eliminate knowledge gaps in order to improve KM benefits and ultimately OP.
Limitations and Directions of Future Research

The research described herein has a number of limitations. First, two configurations of the KMI model (first-order unidimensional and second-order multidimensional models) were discussed, but only the multidimensional model was considered most appropriate and investigated. Our position does not exclude the possibility of a first-order formative index for assessing KM. Such an index could be investigated if the KMI were not defined in terms of KMPs, which we maintain are distinct from each other and constitute individual constructs. Second, OP is multi-faceted. Only one facet of OP, associated with human resources development capabilities, was considered when testing the KMI model in a nomological network. Future research should investigate the relationship between the KMI and other facets of OP. Third, the item reduction process almost resulted in the elimination of culture as a CSF. Only one item in the reduced scale relates culture to one of the KMPs (knowledge utilization).
We believe the impact of culture as a CSF is not limited to knowledge utilization only. Even though the model based on the reduced items was successfully cross-validated, future research should consider augmenting the items to ensure a balanced representation of the CSFs. Fourth, although an international sample was used in the study and the KMI model was found to be stable in both the calibration and validation samples, the results cannot be generalized to all settings without further testing. Fifth, the generalizability of the findings of the study is also limited in view of the study's sample size, which is based on the weak, rather than the stringent, rule of thumb. Another direction for future research is replication studies with larger sample sizes.
References

Alavi, M., & Leidner, D. E. (2001). Knowledge management and knowledge management systems: Conceptual foundations and research issues. Management Information Systems Quarterly, 25(1), 107–136. doi:10.2307/3250961 Amabile, T. (1997). Motivating creativity in organizations: On doing what you love and loving what you do. California Management Review, 40(1), 39–58. Anantatmula, V. S. (2005). Outcomes of Knowledge Initiatives. International Journal of Knowledge Management, 1(2), 50–67. Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103, 411–423. doi:10.1037/00332909.103.3.411 April, K. A. (2002). Guidelines for developing a K-strategy. Journal of Knowledge Management, 6(5), 445–456. doi:10.1108/13673270210450405
Arnett, D. B., Laverie, D. A., & Meiers, A. (2003). Developing parsimonious retailer equity indexes using partial least squares analysis: A method and applications. Journal of Retailing, 79, 161–170. doi:10.1016/S0022-4359(03)00036-8 Arthur Andersen. (1996). The knowledge management practices book: A guide to who’s doing what in organizational knowledge management. Arthur Andersen Consulting. Asoh, D., Belardo, S., & Crnkovic, J. (2002). Modeling and constructing the Knowledge Management Index of organizations. Paper presented at the 6th World Multiconference on Systemics, Cybernetics, and Informatics (SCI 2002), Orlando, FL. Asoh, D., Belardo, S., & Crnkovic, J. (2004). The relationship between the Knowledge Management Index and organizational performance: A preliminary empirical analysis. Paper presented at the 15th International Information Resource Management Association Conference (IRMA 2004), New Orleans, Louisiana.
Bontis, N. (2001). Assessing knowledge assets: A review of the models used to measure intellectual capital. International Journal of Management Reviews, 3(1), 41–60. doi:10.1111/1468-2370.00053 Bose, R. (2004). Knowledge management metrics. Industrial Management & Data Systems, 104(6), 457–468. doi:10.1108/02635570410543771 Boudreau, M., Gefen, D., & Straub, D. W. (2001). Validation of IS research: A state-of-the-art assessment. Management Information Systems Quarterly, 25(1), 1–24. doi:10.2307/3250956 Brown, R. B., & Woodland, M. J. (1999). Managing knowledge wisely: A case study in organizational behavior. Journal of Applied Management Studies, 8(2), 175–198. Byrne, B. M. (2001). Structural Equation Modeling with AMOS: Basic concepts, applications, and programming. Mahwah, NJ: Lawrence Erlbaum Associates. Chan, I., & Chau, P. Y. K. (2005). Getting knowledge management right: Lessons from failure. International Journal of Knowledge Management, 1(3), 40–54.
Asoh, D. A., Belardo, S., & Crnkovic, J. (2005). Computing the Knowledge Management Index: Validation of the instrument. Paper presented at the 16th Information Resources Management Association Conference (IRMA 2005), San Diego, California.
Chan, Y. E., Huff, S. L., & Copeland, D. G. (1998). Assessing realized information systems strategy. Strategic Information Systems, 6, 273–298. doi:10.1016/S0963-8687(97)00005-X
Barclay, D., Higgins, C., & Thompson, R. (1995). The Partial Least Squares (PLS) approach to causal modeling: Personal computer adoption and use as an illustration. Technology Studies, 2(2), 285–309.
Chin, W. W. (1998). The partial least squares approach to structural equation modeling. In Marcoulides, G. A. (Ed.), Modern Methods for Business Research (pp. 295–336).
Barney, J. B. (1991). Firm resources and sustained competitive advantage. Journal of Management, 17(1), 99–120. doi:10.1177/014920639101700108
Chin, W. W. (2005). PLS Graph. Soft Modeling, Inc.
Barney, J. B. (2001). Is the resource-based “view” a useful perspective for strategic management research? Yes. Academy of Management Review, 26(1), 41–56. doi:10.2307/259393
Chin, W. W., & Gopal, A. (1995). Adoption intention in GSS: Relative importance of beliefs. The Data Base for Advances in Information Systems, 26(2&3), 42–63.
Chin, W. W., & Newsted, P. R. (1999). Structural equation modeling analysis with small samples using Partial Least Squares. In Marcoulides, G. A. (Ed.), Modern Methods for Business Research (pp. 307–341). Comer, J. M., Machleit, K. A., & Lagace, R. R. (1989). Psychometric assessment of a reduced version of INDSALES. Journal of Business Research, 18(4), 291–302. doi:10.1016/01482963(89)90023-4 Crnkovic, J., Belardo, S., & Asoh, D. (2004). The Knowledge Management Index as a micro level organizational diagnostic tool: Analysis and illustrations with data from a pilot study. Paper presented at the 9th World Multiconference on Systemics, Cybernetics, and Informatics (SCI 2004), Orlando, Florida. Darroch, J. (2003). Developing a measure of knowledge management behaviors and practices. Journal of Knowledge Management, 7(5), 41–54. doi:10.1108/13673270310505377 Davenport, T. H., De Long, D. W., & Beers, M. C. (1998). Successful knowledge management projects. Sloan Management Review, (Winter): 43–57. De Long, D. W., & Fahey, L. (2000). Diagnosing cultural barriers to knowledge management. The Academy of Management Executive, 14(4), 113–127. DeVellis, R. F. (1991). Scale development: Theory and applications. Newbury Park, CA: Sage. Diamantopoulos, A., & Winklhofer, H. M. (2001, May). Index construction with formative indicators: An alternative to scale development. JMR, Journal of Marketing Research, xxxxvlll, 269–277. doi:10.1509/jmkr.38.2.269.18845 Dillman. (2000). Mail and Internet Survey: The tailored design method. New York: John Wiley & Sons.
Edwards, J. R., & Bagozzi, R. P. (2000). On the nature and direction of the relationship between constructs and measures. Psychological Methods, 5, 155–174. doi:10.1037/1082-989X.5.2.155 Falk, R. F., & Miller, N. B. (1992). A premier for Soft Modeling. Akron, OH: The University of Akron. Fornell, C., & Bookstein, F. (1982). Two structural equation models: LISREL and PLS applied to consumer exit-voice theory. JMR, Journal of Marketing Research, 19, 440–452. doi:10.2307/3151718 Fornell, C., & Larcker, D. (1981). Evaluating structural equation models with unobservable variables and measurement errors. JMR, Journal of Marketing Research, 18, 39–50. doi:10.2307/3151312 Fornell, C., Lorange, P., & Roos, J. (1990, Oct.). The cooperative venture formation process: A latent variable structural modeling approach. Management Science, 36, 1246–1255. doi:10.1287/ mnsc.36.10.1246 Fornell, C. R. (1982). A Second Generation of Multivariate Analysis.: Vol. I. Methods. New York: Praeger. Gefen, D., Straub, D. W., & Boudreau, M.-C. (2000). Structural equation modeling and regression: Guidelines for research practice. Communications of the Association for Information Systems, 4(Article 7), 1-77. Gerbing, D. W., & Anderson, J. C. (1988). An updated paradigm for scale development incorporating unidimensionality and its assessment. JMR, Journal of Marketing Research, 25, 186–192. doi:10.2307/3172650 Gopal, A., Bostrom, R. P., & Chin, W. W. (1992). Applying adaptive structuration theory to investigate the process of group support systems use. Journal of Management Information Systems, 9(3), 45–69.
Grant, R. A., & Higgins, C. A. (1991). The impact of computerized performance monitoring on service work: Testing a casual model. Information Systems Research, 2(2), 116–142. doi:10.1287/ isre.2.2.116 Grant, R. M. (1996). Toward a knowledge-based theory of the firm. Strategic Management Journal, 17, 109–122. Grover, V., & Davenport, T. (2001). General perspectives on knowledge management: Fostering a research agenda. Journal of Management Information Systems, 18(1), 5–22. Hackett, B. (2000). Beyond Knowledge Management: New ways to work and learn.: The Conference Board. Hansen, M., Nohira, N., & Tierney, T. (2001). In In, H. B. R. (Ed.), What’s your strategy for managing knowledge? (pp. 61–86). Boston: Harvard Business Review on Organizational Learning. Hendriks, P., & Vriens, D. (1999). Knowledgebased systems and knowledge management: friends or foes? Information & Management, 35(2), 113–125. doi:10.1016/S0378-7206(98)00080-9 Hibbard, J. (1997, Oct.). Knowing what we know. InformationWeek, 653, 46–54. Hulland, J. (1999). Use of partial least squares (PLS) in strategic management research: A review of four recent studies. Strategic Management Journal, 20, 195–204. doi:10.1002/ (SICI)1097-0266(199902)20:2<195::AIDSMJ13>3.0.CO;2-7 Iivari, J. (2005). An empirical test of the DeLoneMcLean Model of information system success. The Data Base for Advances in Information Systems, 36(2), 8–26.
Jarvis, C. B., Mackenzie, S. B., & Podsakoff, P. M. (2003, September). A critical review of construct indicators and measurement model misspecification in marketing and consumer research. The Journal of Consumer Research, 30, 199–218. doi:10.1086/376806 Kankanhalli, A., & Tan, B. C. Y. (2005). Knowledge management metrics: A review and directions for future research. International Journal of Knowledge Management, 1(2), 20–32. Kim, S., & Hagtvet, K. A. (2003). The impact of misspecified item parceling on representing latent variables in covariance structure modeling: A simulation study. Structural Equation Modeling, 10(1), 101–127. doi:10.1207/ S15328007SEM1001_5 Lee, K. C., Lee, S., & Kang, I. W. (2005). KMPI: Measuring knowledge management performance. Information & Management, 42, 469–482. doi:10.1016/j.im.2005.10.003 Little, T. D., Cunningham, W. A., Shahar, G., & Widaman, K. F. (2002). To parcel or not to parcel: Exploring the question, weighing the merits. Structural Equation Modeling, 9(2), 151–173. doi:10.1207/S15328007SEM0902_1 Loehlin, J. C. (2004). Latent Variable Models: An introduction to factor, path, and structural equation analysis. Mahwah, NJ: Lawrence Erlbaum Associates. MacKenzie, S. B. (2003). The dangers of poor construct conceptualization. Journal of the Academy of Marketing Science, 31(3), 323–326. doi:10.1177/0092070303031003011 MacKenzie, S. B., Podsakoff, P. M., & Jarvis, C. B. (2005). The problem of measurement model misspecification in behavioral and organizational research and some recommended solutions. The Journal of Applied Psychology, 90(4), 710–730. doi:10.1037/0021-9010.90.4.710
Matsuno, K., Mentzer, J. T., & Rentz, J. O. (2000). A refinement and validation of the MARKOR scale. Journal of the Academy of Marketing Science, 28(4), 527–539. doi:10.1177/0092070300284005 Netemeyer, R. G., Krishnan, B., Pullig, C., Wang, G., Yagei, M., & Dean, D. (2004). Developing and validating measures of facets of customer-based brand equity. Journal of Business Research, 57, 209–224. doi:10.1016/S0148-2963(01)00303-4 Nunnally, J. (1978). Psychometric theory (2nd ed.). New York: McGraw Hill. O’Dell, C. (2004). The executive role in knowledge management: American Productivity and Quality Center. APQC. O’Dell, C., Wiig, K., & Odem, P. (1999). Benchmarking unveils emerging knowledge management strategies. Benchmarking: An International Journal, 6(3), 202–211. doi:10.1108/14635779910288550 Ping, R. A. Jr. (2004). On assuring valid measures for theoretical models using survey data. Journal of Business Research, 2004, 125–141. doi:10.1016/ S0148-2963(01)00297-1 Quaduss, M., & Xu, J. (2005). Adoption and difussion of knowledge management systems: Field studies of factors and variables. Knowledge-Based Systems, 18(2001), 107-115. Rigdon, E. E. (1998). Structural equation modeling. In Marcoulides, G. A. (Ed.), Modern Methods for Business Research (pp. 251–293). Ruggles, R. (1998). The state of the notion: Knowledge management in practice. California Management Review, 40(3), 80–89. Schein, E. H. (2001). Defining organizational culture. In Classics of Organization Theory (pp. 369–379). Fort Worth, FL: Harcourt College.
Segars, A. (1997). Assessing the unidimensionality of measurement: A paradigm and illustration within the context of information systems research. Omega. International Journal of Management Science, 25(1), 107–121. Sethi, V., & King, W. R. (1991). Construct measurement in information systems research: An illustration in strategic systems. Decision Sciences, 22(3), 455–464. doi:10.1111/j.1540-5915.1991. tb01274.x Sher, P. J., & Lee, V. C. (2004). Information technology as a facilitator for enhancing dynamic capabilities through knowledge management. Information & Management, 41(8), 933–945. doi:10.1016/j.im.2003.06.004 Sheskin, D. J. (2004). Handbook of parametric and nonparametric statistical procedures. (3 ed.). Boca Raton, FL: Chapman & Hall/CRS. Shevlin, M., Miles, J. N. V., Davies, M. N. O., & Walker, S. (2000). Coefficient alpha: A useful indicator of reliability? Personality and Individual Differences, 28, 229–237. doi:10.1016/S01918869(99)00093-8 Shin, M. (2004). A framework for evaluating economics of knowledge management systems. Information & Management, 42, 179–196. Spector, P. E. (1992). Summated ratings scales construction. Newbury Park, CA: Sage. Stivers, B. P., Covin, T. J., Hall, N. G., & Smalt, S. W. (1998). How nonfinancial performance measures are used. Management Accounting (USA), 79(8), 44–48. Straub, D., Boudreau, M.-C., & Gefen, D. (2004). Validation guidelines for IS positivist research. Communications of the ACM, 13, 380–427. Tsui, E. (2005). The role of IT in KM: Where are we now and where are we heading? Journal of Knowledge Management, 9(1), 3–6. doi:10.1108/13673270510584198
von Krogh, G. (1998). Care in knowledge creation. California Management Review, 40(3), 133–153. Yang, Z., Cai, S., Zhou, Z., & Zhou, N. (2005). Development and validation of an instrument to measure user perceived service quality of information presenting web portals. Information & Management, 42, 575–589. doi:10.1016/S03787206(04)00073-4
Zyngier, S. (2003). The role of information technology in knowledge management strategies in Australia: Recent trends. Journal of Information and Knowledge Management, 2(2), 165–178. doi:10.1142/S0219649203000061
Appendix A: KMI Instrument (Q1, Q2, Q3 … Q32)

Instruction: In this section, a statement is made concerning some aspects of knowledge management in your organization. You should indicate how important (IMP) you think the aspect is, and how effective (EFT) it is currently being experienced. Your response on the importance and effectiveness should be as follows: SD-Strongly Disagree, D-Disagree, N-Not Sure, A-Agree, and SA-Strongly Agree. Mark the appropriate response corresponding to your answer, against IMP and EFT.

(In) my organization, agency, or department:
1. provides employees with appropriate technology tools to identify critical knowledge for business activities as required*.
2. has a strategic program in place to identify, collect and analyze business intelligence information to develop business strategy*.
3. management is committed to the identification of the right knowledge for organization business, demonstrates commitment and action in knowledge management policy, guidelines and activities.
4. management constantly reviews and acts on opportunities for appropriate alliances and joint ventures to increase the organization's intellectual capital.
5. is open to ideas and knowledge from all employees, irrespective of status.
6. is open to ideas and knowledge from other organizations, agencies, departments, or disciplines.
7. employs some means of determining the percentage of knowledge required and received for its business processes*.
8. regularly identifies, reviews, and deletes out-of-date information and ensures updates from designated information owners.
9. uses technological tools to create opportunities for employees to contribute knowledge in the form of tips to others*.
10. makes wide use of electronic documentation, cataloguing and archiving practices.
11. management actively promotes behaviors that enable knowledge owners to put knowledge at the service of others*.
12. management actively promotes behaviors that enable knowledge seekers to ask their questions to others without penalties for not knowing.
13. obtaining knowledge from fellow employees is routine and second nature.
14. sharing stories of success is encouraged.
15. is constantly assessing the extent to which employees' knowledge is shared.
16. is constantly evaluating the possibilities to get the most knowledge out of its employees*.
17. technology is understood to be an enabler which ensures that the right knowledge gets to the right person at the right time*.
18. electronic networks for internal and external knowledge dissemination are adequate*.
19. management actively promotes collaboration, teamwork and rotation of staff to spread best practices and ideas*.
20. management actively promotes informal networks such as communities of practice.
21. employees are actively engaged in informal networks such as communities of practice.
22. sharing knowledge with fellow employees is routine and second nature.
23. is constantly reviewing the extent to which best practices disseminate.
24. constantly measures whether the people who need the knowledge get it when they need it.
25. employs technology to track what knowledge is being used in the organization.
26. employs technology that makes the utilization of the knowledge resources transparent to all*.
27. is constantly tracking to ensure people who need knowledge get it when they need it*.
28. intellectual assets are recognized and valued in the organization.
29. employees do not distinguish between personal and corporate knowledge when it comes to utilizing knowledge resources for the organization's business.
30. does not discourage improvisation by employees related to business objectives*.
31. has defined responsibilities and a budget set for knowledge management.
32. has key performance measures of knowledge management.
Asterisk (*): final items retained for the KMI model.
aPPendix b: organizational PerforMance (oP) (P1, P2, P3 … P16) Instructions: In this section, a statement is made concerning some aspects of the performance of your organization. Please consider your response for the period starting from when you think formal knowledge management or knowledge management-related activities were initiated in your organization. Your response should be as follows: SD-Strongly Disagree, D-Disagree, N-Not Sure, A-Agree, and SA-Strongly Agree. P#
My organization, agency, or department:
1. produces accurate, reliable, and thorough financial reports
2. communicates budgetary and financial data to citizens/customers
3. produces financial reports in a timely manner
4. accurately gauges the cost of delivering programs/services/products
5. conducts strategic analysis of present and future human resource needs*
6. is able to facilitate timely and quality hiring as required*
7. has sophisticated professional development programs*
8. has meaningful reward and evaluation structures for staff
9. has sufficient data to support analysis and management requirements
10. effectively monitors and evaluates projects throughout implementation
11. identifies strategic objectives that provide a clear purpose
12. effectively communicates strategic objectives to all employees
13. is responsive to input from customers, stakeholders, and employees
14. develops indicators and evaluative data that can measure progress toward results and accomplishments
15. uses results data for decision-making and evaluation of progress
16. clearly communicates the results of its activities to stakeholders
Bold and asterisk (*): Final items retained for the HR-Capability component of OP.
Each statement is rated twice, once for IMP and once for EFT, on the five-point scale SD, D, N, A, SA.
Chapter 10
A Relational-Based View of Intellectual Capital in High-Tech Firms
G. Martín De Castro, Universidad Complutense de Madrid, Spain
P. López Sáez, Universidad Complutense de Madrid, Spain
J.E. Navas López, Universidad Complutense de Madrid, Spain
M. Delgado-Verde, Universidad Complutense de Madrid, Spain
ABSTRACT
The Resource-Based View (RBV) has tried to test the role of strategic resources in sustained competitive advantage and superior performance. Although several flaws have been found in this theory's ability to reach that objective effectively (Priem & Butler, 2001), recent proposals suggest that these problems can be overcome (Peteraf & Barney, 2003). The solution requires paying greater attention to the analysis of knowledge stocks and developing a mid-range theory: the Intellectual Capital-Based View (Reed, Lubatkin & Srinivasan, 2006). This mid-range and pragmatic theory allows hypotheses to be developed and tested empirically more effectively than the RBV. There is a certain degree of general agreement about the presence of human capital and organizational capital as the main components of intellectual capital, as well as about the fact that the configuration of knowledge stocks will vary from one industry and firm to another. Taking these assumptions as a starting point, this chapter explores the configuration of intellectual capital that can be empirically found in a sample of high-technology firms. Our findings highlight the importance of relational capital, which must be divided into business and alliance capital, since strategic alliances play a relevant role in the type of firms included in our research. DOI: 10.4018/978-1-60566-709-6.ch010
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION
From the Resource-Based View (RBV), it is widely accepted that sustained competitive advantage and superior rents are closely tied to a company's ability to utilize and deploy its intangible resources and capabilities, that is, its knowledge stocks (Barney, 1991; Grant, 1996) or intellectual capital (Subramaniam & Youndt, 2005). Nevertheless, the RBV suffers from various concerns (Priem & Butler, 2001): (i) it is not prescriptive; (ii) it is too general; and (iii) it lacks a clear definition of its key concepts, among others. These may be the reasons why so little effort has gone into conceptual and empirical testing of it. To overcome some of these concerns, a pragmatic and focused framework arose during the 1990s, called the Intellectual Capital-Based View (ICV) (Reed et al., 2006). As a mid-range theory, the ICV should allow better hypothesis development and empirical testing than a more general framework such as the RBV. In this sense, several intellectual capital models have been provided in the literature (Brooking, 1996; Kaplan & Norton, 1996; Edvinsson & Malone, 1997; Bueno, 1998; CIC, 2003; among others) to measure and conceptualize intellectual capital. However, it is necessary to improve previous proposals and to empirically support models for the classification and measurement of intellectual capital. Most of them use three elements of intellectual capital: human capital, structural capital, and relational capital (Leitner, 2005), which represent, in a broad sense, all expressions of a firm's knowledge stocks. In this way, the concept of intellectual capital is reconciled (CIC, 2003). This work is based on empirical research in high-tech organizations, since the dominant stream of theoretical proposals on intellectual capital adopts the following three basic components:
• Human capital, which includes the values and attitudes, aptitudes, abilities, experience and know-how of employees in carrying out the different activities of the organization.
• Structural capital, which contains both organizational and technological elements that pursue integration and coordination within the firm. In this sense, structural capital is the set of organizational methods and processes needed to obtain products and services and to complete organizational tasks.
• Relational capital, which gathers the value of the relationships a firm maintains with external agents (close to the business activity or through strategic alliances).
The empirical research, focused on high-tech firms, presents an interesting case for the study of different kinds of intangible or knowledge assets in knowledge-intensive firms (Leitner, 2005). The aim of this chapter is to test the previous models and to provide a configurative definition of intellectual capital from the different components it comprises.
THEORETICAL BACKGROUND
Knowledge assets, or intellectual capital, have been accepted throughout the scientific literature as a source of economic wealth, as has their useful application (Teece, 1998). Nevertheless, studies about their identification, measurement and strategic assessment remain limited because of the several problems involved. These problems are addressed by intellectual capital models, which carry out the measurement and identification of the different components that compose it. Furthermore, the importance of managing intellectual capital in firms is a key motivation for a work like this one. In addition, the definition of intellectual capital by Bueno (1998: 221), 'basic competencies of intangible character that allow creating and maintaining competitive advantage', shows how intellectual capital can be tied to the Resource-Based View (RBV). In this way, intellectual capital is used as a synonym for intangible or knowledge assets (Stewart, 1991). The different components of intellectual capital allow its assessment to be improved, as they symbolize diverse kinds of intangible resources and capabilities and make its analysis simpler. However, in spite of their strategic nature, not all of these assets have the same value for the firm, as the studies of Itami & Roehl (1987), Aaker (1989), Prahalad & Hamel (1990), or Hall (1992, 1993), which emphasize the importance of certain intangibles, seem to suggest. Setting out this kind of difference can be a useful aid for strategic management, since it can facilitate decisions about the actions the firm should perform and about the implementation of programs to protect, maintain or develop the more valuable intangible assets. Thus, an understandable classification of intellectual capital is required in order to explore the relation between any specific kind of intellectual asset and competitive advantage. Nevertheless, there are numerous classifications of the different components of intellectual capital, as well as series of indicators for its measurement. According to most of the theoretical proposals, in a first step, three main components can be found: (i) human capital; (ii) structural capital; and (iii) customer or relational capital (Kaplan & Norton, 1992; Saint-Onge, 1996; Edvinsson & Malone, 1997; Sveiby, 1997; Bontis, 1998; Carson et al., 2004; Moon & Kym, 2006; Cabrita & Bontis, 2008; Kong, 2008). However, in a second step, it can be observed that various authors take into account a larger number of components in order to carry out a more detailed and deeper analysis of intellectual capital (Brooking, 1996; Roos & Roos, 1997; CIC, 2003; Leliaert et al., 2003; Pike et al., 2005;
Carlucci & Schiuma, 2007), trying to differentiate issues of a different nature in order to improve their examination. In this sense, with respect to structural capital, Brooking (1996) highlights the differences between intellectual property assets (focused on technological knowledge) and infrastructure assets (focused on organizational knowledge). Regarding relational capital, Leliaert et al. (2003) distinguish between customer capital (assets related to clients) and strategic alliance capital (assets regarding relationships derived from alliances), while Carlucci & Schiuma (2007) discern social capital (assets regarding networks of relationships among agents) and stakeholder capital (assets related to relationships maintained with internal and external stakeholders). Other models, such as the Intellectus Model (CIC, 2003), include five components: (i) human capital (the tacit or explicit knowledge which people possess, as well as their ability to generate it, which is useful for the mission of an organization and includes values and attitudes, aptitudes and know-how); (ii) technological capital (the combination of knowledge directly linked to the development of the activities and functions of the technical system of an organization, responsible for obtaining products and services); (iii) organizational capital (the combination of explicit and implicit, formal and informal knowledge which, in an effective and efficient way, structures and develops the organizational activity of a firm, and which includes culture, as implicit and informal knowledge; structure, as explicit and formal knowledge; and organizational learning, as implicit and explicit, formal and informal knowledge-renewal processes); (iv) business capital (the value to an organization of the relationships it maintains with the main agents connected with its basic business processes: customers, suppliers, allies, etc.); and (v) social capital (the value to an organization of the relationships it maintains with other social agents and its surroundings).
As has been presented, structural capital was divided into technological and organizational capital, and relational capital was divided into business and social capital, due to their heterogeneous nature, allowing a better understanding of these types of factors. The Intellectus Model (CIC, 2003) is a good example of how theoretical proposals about intellectual capital are becoming more complex and detailed every day. This encourages analytical reflection among managers and Chief Knowledge Officers, but it can also be seen as an overly extensive proliferation of criteria and categories of intangible assets. In this sense, empirical evidence is needed to determine the level of aggregation that intellectual capital components must adopt in practice. Thus, the aim of this work is to build the blocks of an intellectual capital balance sheet, taking the three most common components of intellectual capital (human capital, structural capital, and relational capital) and testing empirically whether this grouping of intangible assets is supported by the evidence obtained from a sample of knowledge-intensive firms. That is, this investigation will try to establish whether there is a specific structure of intellectual capital in high-tech firms and whether some of its components stand out, exploring the configuration of intellectual capital that can be empirically found in a set of high-technology firms from Boston's Route 128 (MA, USA). In addition, Route 128 is one of the most important technological clusters, where companies maintain relationships with customers, suppliers and competitors, which is interesting for examining relational capital.
SAMPLE AND METHOD
Taking into account the previously mentioned theoretical proposal, we empirically test the presented simple model of intellectual capital in knowledge-intensive firms. With this purpose, we carried out a survey of firms operating within NAICS 334 (Computer and Electronic Product Manufacturing), 516 (Internet Publishing and Broadcasting), 517 (Telecommunications) and 518 (Internet Service Providers, Web Search Portals, and Data Processing Services) on Boston's Route 128 (Massachusetts, USA) during 2005. The selection of industries was guided by the purpose of having a homogeneous sample (Rouse & Daellenbach, 1999). From a population of 422 firms, 52 firms finally took part in our survey, so we reached a response rate of 12.32% (see Table 1 for a general description of the fieldwork). In preparing the questionnaire used to collect quantitative data from primary and internal sources especially chosen for our research, we followed a process that can be divided into four phases: (1) literature review; (2) elaboration of an initial version of the questionnaire; (3) pre-testing the preliminary version of the survey; and (4) correcting and reframing the questionnaire in order to obtain a final version to be used in the fieldwork. The questionnaire employed for the survey included 12 items measuring different aspects of intellectual capital according to the three main constructs involved: 4 items were devoted to human capital (HC), 3 addressed structural capital (SC), and 5 analyzed relational capital (RC). Firms had to answer on a seven-point Likert-type scale, showing their level of agreement with the sentences presented in the survey. The 12 items employed in the questionnaire were derived from general insights about the pre-defined components of intellectual capital taken into account (see Table 2). The items were ungrouped in the questionnaire, and one of them was reversely worded ('our relations with suppliers are sporadic and punctual'). These features supported attention and sense-making on the part of the respondent (the CEO).
Table 1. Research summary
Research focus: Knowledge creation processes
Criteria defining sample: Knowledge-intensive firms from industries NAICS 334, 516, 517 & 518; located on "Route 128" (Massachusetts, USA); 50 employees or more; included in the CareerSearch database
Sample: 422 firms
Response rate: 52 firms (12.32%)
Method for data gathering: Survey
Process for data gathering: Ordinary mail; follow-up by phone; backup with a second ordinary mailing, fax, webpage and e-mail
Statistical software used: SPSS 12.0 for Windows (version 12.0.1)
Table 2. Intellectual capital elements: descriptive statistics (questionnaire item: mean, standard deviation)
HC2 - Our employees are among the most experienced in the industry: 5.92, 1.074
HC1 - Our employees develop new ideas and knowledge: 5.81, 1.049
HC4 - Our employees have a long experience in the firm: 5.67, 1.232
HC3 - Our employees do team work: 5.67, 1.098
RC5 - Our firm is recognized by the external agents (customers, suppliers, competitors, and the general public) as one of the best firms in the industry: 5.61, 1.297
RC2 - Our customers are highly loyal to our firm: 5.35, 1.341
RC4 - Our collaboration agreements are held during long periods of time: 5.19, 1.394
SC1 - Our efforts in creating and sustaining an organizational culture are among the highest in our industry: 5.02, 1.651
SC2 - Our firm develops more ideas and products than any other firm in our industry: 4.75, 1.671
SC3 - We perform a lot of actions to spread our corporate values and beliefs: 3.96, 1.703
RC3 - Our relations with suppliers are sporadic and punctual (R): 3.81, 1.313
RC1 - Our firm devotes an important part of its budget to funding community and green actions: 2.60, 1.796
(R) Reversed item. Un-reversed mean would be 4.19. Standard deviation remains the same.
Assessing intellectual capital on a homogeneous scale is not easy; nevertheless, the survey allows these comparisons to be made by applying the same assessment framework to each respondent.
RESULTS
A factor analysis was carried out in order to identify the main dimensions (Hair et al., 2004) of intellectual capital for these types of industries, as well as their main elements and variables. In the following paragraphs, as a preliminary approach to the data analysis performed after data gathering, a comment on the descriptive statistics for the items of the questionnaire is provided. This analysis allows us to detect the most and least common aspects of intellectual capital that firms possess (see Table 2).
As can be seen, the items related to human capital show the highest means (close to 6 on a scale with 7 as the maximum value). This indicates that firms operating in the chosen industries are highly focused on having strong human capital, and these data are quite robust, as the low standard deviations show (see Table 2). Almost every firm strongly values its human capital. Employees with extensive experience in the industry, the ability to develop new ideas and knowledge, experience within the firm, and involvement in teamwork appear as key assets for competing in the analysed industries. The surveyed firms agree considerably (reduced standard deviations) in recognizing as next in importance among their intellectual strengths and assets their renown among customers, suppliers, competitors and the general public, effective customer loyalty, and the long-lasting collaboration agreements sealed by the firm. All of these issues are tied to relational capital in the form of reputation-based and operationally based relationships with the environment. The item 'our relations with suppliers are sporadic and punctual' (RC3) deserves special attention, and places supplier relations as an asset of intermediate strength. This is consistent with the literature, which confers less relevance on relations with suppliers compared with other external agents such as customers or allies. This is backed by the obtained results, because the items devoted to these agents show higher values as firm strengths than relations with suppliers. When firms assessed their intellectual capital positions, the issues tied to structural capital ranked among the less common elements. Organizational culture emerges as the most employed element of internal coherence, but firms differ considerably on this issue (see the standard deviations in Table 2). The effective flow of ideas and products delivered to the market is a moderately common asset, but we must take into account that it has been posed in industrial-competition terms. Finally, the relevance of actions for spreading and reinforcing corporate values and beliefs differs considerably for each particular firm (see standard deviations in Table 2). To end this preliminary descriptive analysis of our results, we must highlight that very few firms in the studied industries invest in community and green actions. Funding these actions was posed as an indicator of relational capital focused on community, social and green-care agents. The average position on this kind of relation is actually low.
After the descriptive statistics, an exploratory factor analysis (Hair et al., 2004) was carried out in order to identify the factors or latent phenomena that lie in the data about intellectual capital provided by the studied firms. For deciding whether factor analysis is an appropriate technique in this case, several preliminary tests are needed: the analysis of communalities, Bartlett's test, and the Kaiser-Meyer-Olkin (KMO) index. Table 3 shows the results of these tests for the set of items contained in the questionnaire employed in our research. As can be seen in Table 3, the tests advise performing the factor analysis; the KMO index is above 0.6, which can be considered acceptable for exploratory studies such as this one, so the factor analysis is appropriate. From the factor analysis we obtained four components of intellectual capital. Jointly they explain almost 70% of the total variance contained in the original data (see Table 3). The first component was labeled "Human Capital" because it gathered all the items originally developed for measuring this construct, as well as one of the elements initially designed for relational capital. The five items included in this component explain 25% of the total intellectual capital of a firm. The element that best characterizes "Human Capital" is the experience in the industry held by employees. Nevertheless, experience in the firm also presents an important factorial weight.
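A minimal sketch of this analysis pipeline is shown below, assuming the data sit in a pandas DataFrame with one row per firm and one column per item, and using the third-party factor_analyzer package. The chapter reports using SPSS 12.0, so this is only a re-expression of the same steps, not the original procedure; the file name is hypothetical.

    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import (
        calculate_bartlett_sphericity, calculate_kmo)

    # df: 52 firms x 12 items (HC1-HC4, SC1-SC3, RC1-RC5), RC3 already reverse-coded
    df = pd.read_csv("ic_items.csv")                           # hypothetical file name

    chi_square, p_value = calculate_bartlett_sphericity(df)    # Bartlett's test
    kmo_per_item, kmo_overall = calculate_kmo(df)               # KMO index (0.618 reported)

    # Principal components extraction with Varimax rotation, four components
    fa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
    fa.fit(df)
    loadings = pd.DataFrame(fa.loadings_, index=df.columns)     # rotated loadings
    ss_loadings, prop_var, cum_var = fa.get_factor_variance()   # ~69.5% cumulative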
Table 3. Rotated components matrix (a)
Human Capital: HC2 .836; HC3 .760; RC5 .739; HC1 .716; HC4 .527
Structural Capital: SC3 .892; RC1 .844; SC1 .681
Business Capital: RC3 .821; SC2 .660; RC2 .507
Alliance Capital: RC4 .903
% variance: 25.078; 20.000; 13.224; 11.248
% cumulative: 25.078; 45.078; 58.302; 69.550
KMO index: 0.618
Extraction method: principal components analysis. Rotation method: Varimax with Kaiser normalization. (a) Rotation converged after 5 iterations.
Besides, this component of intellectual capital includes the employees' abilities to develop ideas and new knowledge and to work in teams, as well as recognition as a leading firm by external agents (see Table 3 for factorial loadings).
The second component found in the factor analysis represents 20% of the intellectual capital of a firm and includes three elements. The most important of them is the set of actions devoted to spreading corporate values and beliefs. Because this item clearly represented structural capital, and because this component includes two of the three items originally designed for structural capital, it was named "Structural Capital". The other two items that appear within this component are the investments in community and green initiatives, and the efforts that a firm makes to create and sustain its organizational culture. The third component of intellectual capital weighted 13% of the total variance contained in the original data and was shaped by three items. The strongest of them represented relations with suppliers, showing content clearly tied to relational capital. In this vein, this component also included relations with customers. The factorial loadings of two relational capital items in this component, as well as the clear dominance of one of them, led us to label it simply "Relational Capital", although it also contained one of the items originally designed for structural capital (see the composition of this component through the factorial loadings shown in Table 3). The last component of intellectual capital provided by the factor analysis was designated "Strategic Alliances" because it contained only one item, initially developed for measuring relational capital: the collaboration agreements held by a firm.
Figure 1. Components of intellectual capital obtained from the empirical research
This component emerged as an entity of its own, representing 11% of the intellectual capital of a firm, which highlights the relevance that special partners can have for firms in the analyzed industries.
DISCUSSION
According to the obtained data, the average intellectual capital balance sheet of a firm in the knowledge-intensive industries of Computer and Electronic Product Manufacturing, Internet Publishing and Broadcasting, Telecommunications, and Internet Service Providers, Web Search Portals, and Data Processing Services operating on Boston's Route 128 at the beginning of 2005 would look something like Figure 1. In this configuration of intellectual capital, human capital appears as the most influential component. It includes the experience, creativity and teamwork of employees, and when a firm holds a strong position in these areas, an image of a leading firm is projected towards the external agents (customers, suppliers, competitors, and the general public) present in its environment. Thus, the quality of the workforce seems to be the main indicator of leadership in the industry. Probably, due to the important knowledge base of the studied industries, the role of key engineers or experts could determine that "the best people make the best firm".
Structural capital represents almost 30% of the total intellectual capital of a typical firm. The purpose of structural capital is to provide an appropriate context for communication, cooperation, adhesion and identity (Kogut & Zander, 1996). Issues related to organizational culture, values and beliefs are gathered under the label of structural capital, although we have found that investments in green care or community initiatives also hold a strong relation to corporate culture and structural capital. This is not strange, because when a positive mission and values are stated for a company, probably the best way to legitimize them is with subsequent actions that reinforce the declared principles. Respect for the natural environment and active involvement in community life are two of the most common aspects included in documents about organizational mission, vision and values, and this explains the configuration obtained for structural capital. Nevertheless, one of the most appealing findings of this research is that relational capital did not appear as initially supposed. Although according to the literature we expected to find all the relations with external agents (customers, suppliers, allies, competitors, …) grouped together, two components of intellectual capital were found with regard to these issues; that is, what we have named "Relational Capital" is divided into business and alliance capital.
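The component shares quoted in this discussion and in the conclusion (human capital as the largest block, structural capital close to 30%, relational capital around 35%) appear to follow from normalizing each rotated component's explained variance in Table 3 by the total variance explained; that normalization step is our reading of the figures, sketched below.

    # Explained variance per rotated component, from Table 3 (% of total variance)
    explained = {"human": 25.078, "structural": 20.000,
                 "business": 13.224, "alliance": 11.248}
    total = sum(explained.values())                               # about 69.55
    shares = {k: round(100 * v / total, 1) for k, v in explained.items()}
    # shares -> human 36.1, structural 28.8, business 19.0, alliance 16.2
    relational_share = shares["business"] + shares["alliance"]   # about 35.2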
Our block of relational capital includes relations with customers and suppliers, as well as the capability of a firm to deliver ideas and products in its industrial setting. Although this characteristic was originally planned as an indicator of structural capital, the development of ideas and products appears intertwined with the industrial environment, involving external aspects, because it was worded as a comparison with the rest of the firm's competitors. In this way, the factor named relational capital represents the set of general relations that a firm holds in its industrial setting, taking into account the interconnections with customers, suppliers and competitors. These agents are very close to the business activities, and this factor can easily be compared to the concept of 'business capital' found in other models (CIC, 2003). The emergence of an independent relational component of intellectual capital for a firm's allies and partners points out that certain collaboration agreements deserve special interest. The presence of strategic partners could make the management and nature of this component considerably different from the management of the rest of the relations with environmental agents. Although we have taken into account firms from different industries, or even from different sectors, there are common patterns in the possible interactions with key partners. Thus, firms born in a certain industry can learn to operate in another one with the help of an appropriate ally, or simply form alliance networks (Kogut, 2000) to reinforce their competitive position. It is not strange to find a computer manufacturer partnering with a firm that develops and updates content for manuals, or distributing its product with the web-searching software of another firm, or providing special reduced conditions for accessing the Internet through a specific company, which surely will need communication equipment for its operations. These are some examples of how strategic alliances can strengthen a firm's competitive position in its own industry, thanks to ties with firms from other industries. This kind of alliance can be key to the specialized management required for success, and that is what the results reveal when "Strategic Alliances" appears as an independent component of intellectual capital. Further research is needed in order to improve knowledge about these building blocks of intellectual capital, bridging the extant advances in the fields of human resource management, organization theory and design, supply chain management and collaborative agreements with the literature on intellectual capital. With empirical research such as that presented in this chapter, managers can discover the components of intellectual capital that can be found in their industry. Then, they should apply the strategies and advice already developed in other fields of management research in order to develop and strengthen each kind of capital. Research efforts are welcome: a) in analyzing the configuration of intellectual capital for different industries, building models from empirical findings, so that theoretical proposals in the field can be supported or improved; and b) in providing guidance for practitioners in the complex process of reinforcing the intangible endowments of a firm, improving each of the different components of intellectual capital.
CONCLUSION AND FUTURE TRENDS
We want to highlight the contribution of our research towards a "Relational-Based View" within the Resource-Based View or Intellectual Capital-Based View. Furthermore, although several proposals about intellectual capital classification, identification and measurement can be found in the literature, this work provides an evidence-driven classification and configuration of intellectual capital in high-tech firms.
In this sense, relational capital stands out, as it represents 35% of the intellectual capital of a firm, although the traditional concept has been divided into business and alliance capital. Thus, human capital is as important as relational capital, leaving a supporting role for structural capital. With respect to the presented empirical model, the classification of the different components of intellectual capital obtained in this work (see Figure 1) is very similar to that traditionally treated in the theoretical literature, where intellectual capital is considered to be shaped by three components. Nevertheless, our research highlights alliance capital as a key component, due to its relevance in the industries of our sample, leaving intellectual capital with four components, two of them internal in nature and two more devoted to relating the firm to its environment. Therefore, regarding the challenges of managing intellectual capital, managers should pay attention to the following points: (a) the recruitment and improvement of human capital, because it is the key to their intellectual capital; (b) a structure for sustaining strategy, linking appropriately the different elements of human capital and designing the map of relationships and alliances needed for successfully running the business; (c) the environment and several relevant agents (such as customers or suppliers), in order to develop those relationships; and (d) key partners, for reaching an important influence on operating, service and financial returns.
REFERENCES
Bontis, N. (1998). Intellectual Capital: An Exploratory Study that Develops Measures and Models. Management Decision, 36, 63–76. doi:10.1108/00251749810204142 Brooking, A. (1996). Intellectual Capital. Core Asset for the Third Millennium Enterprise. London: International Thomson Business Press. Bueno, E. (1998). El Capital Intangible como Clave Estratégica en la Competencia Actual. Boletín de Estudios Económicos, 53, 207–229. Cabrita, M. R., & Bontis, N. (2008). Intellectual Capital and Business Performance in the Portuguese Banking Industry. International Journal of Technology Management, 43(1-3), 212–237. doi:10.1504/IJTM.2008.019416 Carlucci, D., & Schiuma, G. (2007). Exploring Intellectual Capital Concept in Strategic Management Research. In Joia, L. A. (Ed.), Strategies for Information Technology and Intellectual Capital (pp. 10–28). Hershey, PA: Information Science Reference. Carson, E., Ranzijn, R., Winefield, A., & Marsden, H. (2004). Intellectual Capital. Mapping Employee and Work Group Attributes. Journal of Intellectual Capital, 5(3), 443–463. doi:10.1108/14691930410550390 CIC (2003). Modelo Intellectus: Medición y Gestión del Capital Intelectual (Serie Documentos Intellectus No. 5). Madrid: Centro de Investigación sobre la Sociedad del Conocimiento (CIC).
Aaker, D. (1989). Managing Assets and Skills: the Key to a Sustainable Competitive Advantage. California Management Review, 31, 91–106.
Edvinsson, L., & Malone, M. (1997). Intellectual Capital. Realizing your Company's True Value by Finding its Hidden Brainpower. New York: Harper Collins Publishers, Inc.
Barney, J. B. (1991). Firm Resources and Sustained Competitive Advantage. Journal of Management, 17, 99–120. doi:10.1177/014920639101700108
Grant, R. M. (1996). Toward a Knowledge-Based Theory of the Firm. Strategic Management Journal, 17, 109–122.
Hair, J. F. Jr, Anderson, R. E., Tatham, R. L., & Black, W. C. (2004). Análisis Multivariante. Madrid: Pearson-Prentice Hall.
Moon, Y. J., & Kym, H. G. (2006). A Model for the Value of Intellectual Capital. Canadian Journal of Administrative Sciences, 23(3), 253–269.
Hall, R. (1992). The Strategic Analysis of Intangible Resources. Strategic Management Journal, 13, 135–144. doi:10.1002/smj.4250130205
Peteraf, M. A., & Barney, J. B. (2003). Unraveling the Resource-Based Tangle. Managerial and Decision Economics, 24(4), 309–323. doi:10.1002/mde.1126
Hall, R. (1993). A Framework Linking Intangible Resources and Capabilities to Sustainable Competitive Advantage. Strategic Management Journal, 14, 607–618. doi:10.1002/smj.4250140804 Itami, H., & Roehl, T. (1987). Mobilizing Invisible Assets. Cambridge, MA: Harvard University Press. Kaplan, R., & Norton, D. (1992). The Balanced Scorecard – Measures that drive Performance. Harvard Business Review, 70, 71–79. Kogut, B. (2000). The Network as Knowledge: Generative Rules and Emergence of Structure. Strategic Management Journal, 21, 405–425. doi:10.1002/(SICI)1097-0266(200003)21:3<405::AID-SMJ103>3.0.CO;2-5 Kogut, B., & Zander, U. (1996). What Firms Do? Coordination, Identity, and Learning. Organization Science, 7(5), 502–518. doi:10.1287/orsc.7.5.502 Kong, E. (2008). The Development of Strategic Management in the Non-Profit Context: Intellectual Capital in Social Service Non-Profit Organizations. International Journal of Management Reviews, 10(3), 281–299. doi:10.1111/j.1468-2370.2007.00224.x Leitner, K. (2005). Managing and Reporting Intangible Assets in Research Technology Organisations. R & D Management, 35, 125–136. doi:10.1111/j.1467-9310.2005.00378.x Leliaert, P. J. C., Candries, W., & Tilmans, R. (2003). Identifying and Managing IC: A New Classification. Journal of Intellectual Capital, 4(2), 202–214. doi:10.1108/14691930310472820
Pike, S., Göran, R., & Marr, B. (2005). Strategic Management of Intangible Assets and Value Drivers in R & D Organizations. R & D Management, 35(2), 111–124. doi:10.1111/j.1467-9310.2005.00377.x Prahalad, C., & Hamel, G. (1990). The Core Competence of the Corporation. Harvard Business Review, 90, 79–91. Priem, R. L., & Butler, J. E. (2001). Tautology in the Resource-Based View and the Implications of Externally Determined Resource Value: Further Comments. Academy of Management Review, 26, 57–66. doi:10.2307/259394 Reed, K. K., Lubatkin, M., & Srinivasan, N. (2006). Proposing and Testing an Intellectual Capital-Based View of the Firm. Journal of Management Studies, 43, 867–893. doi:10.1111/j.1467-6486.2006.00614.x Roos, G., & Roos, J. (1997). Measuring your Company's Intellectual Performance. Long Range Planning, 30(3), 413–426. doi:10.1016/S0024-6301(97)90260-0 Rouse, M. J., & Daellenbach, U. S. (1999). Rethinking Research Methods for the Resource-Based Perspective: Isolating Sources of Sustainable Competitive Advantage. Strategic Management Journal, 20, 487–494. doi:10.1002/(SICI)1097-0266(199905)20:5<487::AID-SMJ26>3.0.CO;2-K
Saint-Onge, H. (1996). Tacit Knowledge: The Key to the Strategic Alignment of Intellectual Capital. Strategy and Leadership, 24, 10–14. doi:10.1108/eb054547 Stewart, T. (1991). Brainpower. Fortune, 123, 44–50. Subramaniam, M., & Youndt, M. A. (2005). The Influence of Intellectual Capital on the Types of Innovative Capabilities. Academy of Management Journal, 48, 450–463.
Sveiby, K. (1997). The New Organizational Wealth. San Francisco, CA: Berrett-Koehler Publishers Inc. Teece, D. (1998). Capturing Value from Knowledge Assets: the New Economy, Markets for Know-how, and Intangible Assets. California Management Review, 40, 55–79.
Section 3
KM Strategies in Practice
Chapter 11
The Effect of Organizational Trust on the Success of Codification and Personalization KM Approaches
Vincent M. Ribière, Bangkok University, Thailand
ABSTRACT
Knowledge management (KM) initiatives are expanding across all types of organizations worldwide. However, not all of them are successful, mainly due to an unfriendly organizational culture. Organizational trust is often mentioned as a critical factor facilitating knowledge sharing. For this research we took an empirical approach to validate this assumption. The purpose of this research is to explore the relationships between organizational trust, a knowledge management strategy (codification vs. personalization) and its level of success. This study was conducted among 97 US companies involved in knowledge management. A survey tool was developed and validated to assess the level of trust, the level of success and the dominant KM strategy deployed by an organization. Nine main research hypotheses and a conceptual model were tested. The findings show the impact of trust on the choice of KM strategy as well as on the level of success. DOI: 10.4018/978-1-60566-709-6.ch011
INTRODUCTION
In 2001, the Journal of Management Information Systems (JMIS) had a special issue on knowledge management (KM). In their editorial, Davenport and Grover (2001) mentioned that a significant gap between KM theory and practice existed and that research in the domain seemed fragmented. Ten years later, we can say that the literature and interest in KM have continued to grow, but research remains fragmented and very few KM theories and frameworks have been developed and fully accepted. It seems that the multidisciplinary aspect of KM slows down the process of developing commonly accepted principles, models and theories. KM might be one of the few fields that requires various disciplines (Management, Information Sciences, Computer Science, Economics, Education, Psychology) to share and to develop common theories,
and it seems that such integration remains a challenge. Earl (2001) created a taxonomy of schools of KM that describes and summarizes the different approaches/views of KM in three categories: Technocratic, Economic and Behavioral. KM has been a hot topic for more than fifteen years, and organizations worldwide are still struggling to successfully implement it and to significantly benefit from it. Bain & Company conducted a study in 2007 regarding global management tools and trends (Rigby & Bilodeau, 2007). Knowledge management was ranked in the top 10 (tied for 7th position) in terms of usage. Unfortunately, it has also been ranked in the bottom 5 for satisfaction in every such survey for the past ten years. This fact illustrates that organizations are still struggling to fully take advantage of their KM investments. The context and business strategy of each company should be taken into consideration when defining a KM strategy. Becerra-Fernandez and Sabherwal (2001) argue that a contingency perspective should be adopted, so that each unit tries to better understand the characteristics of its tasks, which will consequently lead to selecting the KM processes most appropriate to it. This finding is aligned with that of Alavi, Kayworth and Leidner (2005), who suggest that differences in cultural values within firms might influence the choice, use and effectiveness of different KM enabling technologies. Markus (2001) also emphasizes the need to provide different types of knowledge repositories for different types of reusers. All these findings suggest the need to take a more micro approach to KM and to develop KM strategies that are granular, flexible and customizable enough to meet every individual's and group's needs. This research embraces a knowledge-based view of the firm, where the primary role of the firm is the integration of knowledge to create organizational capabilities and to gain a sustainable competitive advantage (Alavi & Leidner, 2001; Dinur, 2002; Grant, 1991). We have gone through different waves and tools of KM, but what remains at the center of managing knowledge is people. If people are not willing to share and acquire knowledge, even the best IT tool will be inefficient. So, in order to gain a sustainable competitive advantage, the human aspect of KM and knowledge-sharing behaviors must be better understood. Various studies and authors (Alavi et al., 2005; Alavi & Leidner, 2001; Barth, 2000; Fahey & Prusak, 1997; Gold, Malhotra, & Segars, 2001; King, 2006; King, 2007; Knowledge Management Review, 2001; KPMG Consulting, 2000; Microsoft, 1999; Pauleen & Mason, 2002; Rigby & Bilodeau, 2007) report that organizational culture remains the main barrier to successful KM implementation. Corporate culture is a set of values, norms, symbols and guiding principles that enable and encourage people to engage in the knowledge activities of knowledge generation, codification, storage, sharing and use. Culture shapes assumptions about which knowledge is important, mediates the relationship between organizational and individual knowledge, creates a context for social interaction, and shapes the processes for the creation and adoption of new knowledge (King, 2007). It encourages knowledge creation by influencing employees to get involved in learning activities in the organization, encourages employees to use information technology to codify and store knowledge in knowledge management systems, encourages knowledge sharing by making it the norm of acceptable behavior, and stimulates knowledge use by influencing employees to constantly innovate and implement the knowledge gained. Therefore, corporate culture is needed to encourage all phases of the knowledge management cycle and to address tacit as well as explicit knowledge. Since tacit knowledge resides in employees, culture should support its creation and sharing through interaction, whereas for explicit knowledge culture should encourage employees to codify it, to enter it into knowledge management systems, and to take part in activities for its transfer. A positive culture can be the difference
between successful companies and those that fail. One study shows that only 10% of companies are successful at creating a high-performance culture (HR Focus, 2007). As King (2007) and Alavi, Kayworth and Leidner (2005) highlighted, few studies have investigated how cultural values might be related to the use of KM technology and practices and to KM outcomes. This empirical and exploratory study will contribute to filling this gap. Trust is often listed as one of the most important cultural values facilitating knowledge sharing and KM success (Alavi et al., 2005; Davenport & Prusak, 1998; De Long & Fahey, 2000; Hinds & Pfeffer, 2003; Hubert, 2002; Kinsey Goman, 2002a, 2002b; Lee & Choi, 2003; Rao, 2002; Rolland & Chauvel, 2000; Von Krogh, 1998). Trust is attracting more and more interest in organizations, and the literature on the topic is also growing rapidly (Kramer, 2007; Schoorman, Mayer, & Davis, 2007). Unfortunately, very few studies have attempted to measure the effect of trust on KM initiatives (Renzl, 2008). This research focuses on this particular aspect.
RESEARCH QUESTION AND DEFINITION OF MAIN RESEARCH VARIABLES
This study attempts to better understand how organizational trust affects the choice and use of KM tools and technologies and the resulting success, or lack thereof, of the organization's KM initiative. Our main research question is as follows: Does the level of organizational trust influence the success of a KM initiative?
In order to study this research question, the level of organizational trust was first assessed through a questionnaire distributed to knowledge workers from different organizations involved in KM. Second, the types of KM tools and technologies implemented and used in these organizations were evaluated. Finally, the level of success achieved was assessed. The next sections define these variables.
Organizational Trust
"Trust is the one essential lubricant to any and all social activities. Allowing people to work and live together without generating a constant, wasteful flurry of conflict and negotiations" (Cohen & Prusak, 2001).
Considerable research has been conducted concerning the concept of trust, both interpersonal trust and organizational trust. As with the concept of organizational culture, organizational trust has been defined somewhat differently in the literature by numerous authors (Carnevale & Wechsler, 1992; Culbert & McDonough, 1986; Griffin, 1967; Luhmann, 1979; Matthai, 1989; McKnight & Chervany, 2000). The definitions of trust are numerous and sometimes confusing, mainly because each discipline views trust from its own perspective. Two definitions of trust were selected:
"Trust consists of a willingness to increase your vulnerability to another person whose behavior you cannot control, in a situation in which your potential benefit is much less than your potential loss if the other person abuses your vulnerability" (Zand, 1997).
"Belief that those on whom we depend will meet our expectations of them" (Shaw, 1997).
Trust is often categorized into two forms (Levin, Cross, & Abrams, 2002a, 2002b; McAllister, 1995): cognition-based and affective-based trust. The cognition-based dimension of trust is associated with beliefs about competence, integrity, responsibility, credibility, reliability, and dependability. It is mainly task-oriented. The affective-based dimension of trust is based on beliefs about reciprocated care and concern, benevolence,
altruism, commitment, and mutual respect. It is relationship-oriented. In organizational settings, the cognition-based form of trust is more central, since it bears more particularly on reliability and dependability (Cook & Wall, 1980). This dimension of trust is the one assessed and used in this study. In addition to the many definitions of trust, many tools have been created to assess its level in an organization. Five trust factors defined by De Furia (1996, 1997) were determined to be most relevant to our research: (1) sharing relevant information; (2) reducing controls; (3) allowing mutual influence; (4) clarifying mutual expectations; and (5) meeting expectations. These factors are described in more detail in the following section of this chapter. Very often people think that an organizational culture with a high level of sociability also implies a high level of trust. This is not always true. Consider the example of a parent-child relationship: you love your children, but that does not imply that you trust them (e.g., you will not leave them by themselves). The opposite is also true: you might trust someone but not necessarily like this person (e.g., an airplane pilot). One also needs to remember that trustworthiness takes a long time to build, and yet trust can be destroyed in an instant. These different examples show the complexity and fragility associated with trust. Trust is part of the social capital of an organization, even though in some particular cases its effect on knowledge sharing might be limited (Bakker, Leenders, Gabbay, Kratzer, & Van Engelen, 2006).
Knowledge Management Strategies and Their Associated Tools and Technologies
Numerous publications present knowledge management practice/tool/technology frameworks. Among them, the knowledge management spectrum presented by Binney (2001) offers a good overview of the different KM tools and practices available to organizations to better manage their knowledge. The tools and practices are organized in six categories: transactional, analytical, asset management, process, developmental, and innovation and creation. Most of them are IT oriented, since IT is the main enabler of KM. Nevertheless, other KM practices that are not driven by IT must also be taken into consideration in order to fully understand the KM strategy of an organization. Two main KM strategies or approaches have emerged: codification vs. personalization. Hansen, Nohria, and Tierney (1999) describe how different companies focus on different practices and strategies in order to manage their knowledge. Additional reasons for this particular categorization of KMS approaches are offered by Jennex and Olfman (2003). Dennis and Vessey (2005) also used these two strategies as the bedrock for their three knowledge management systems: knowledge hierarchies (where knowledge is viewed as a formal organizational resource), knowledge markets (where knowledge is treated as an individual resource), and knowledge communities (where knowledge is viewed as a communal resource).
The Codification Approach
The first strategy identified by Hansen et al. (1999) is called "codification", and it relies heavily on IT. One of the benefits of the codification approach is the reuse of knowledge. "Knowledge is codified and stored in databases, where it can be accessed and used easily by anyone in the company. Knowledge is codified using a people-to-documents approach: it is extracted from the person who developed it, made independent of that person, and reused for various purposes" (Hansen, et al., 1999). It has been named and described differently by other authors: the cognitive network model (Swan, Newell, Scarbrough, & Hislop, 1999); the collecting dimension (Denning, 1998); the product view approach (Know-Net, 2000);
the transformation model (Natarajan & Shekhar, 2000); distributive applications (Zack & Michael, 1998); and the document-centered approach and the technological approach (Wick, 2000). After a close analysis of these different portrayals, one can conclude that all of these descriptions and definitions are very similar and depict the same type of practices and tools (Ribière, 2001).
The Personalization Approach
The personalization approach (Hansen, et al., 1999) focuses on developing networks for linking people so that tacit knowledge can be shared. It invests moderately in IT. This approach focuses on dialogue between individuals, not on knowledge in a database. "Knowledge that has not been codified—and probably couldn't be—is transferred in brainstorming sessions and one-on-one conversations" (Hansen, et al., 1999). An investment is made in building networks of people, where knowledge is shared not only face-to-face but also over the telephone, by email, and via videoconference. All the previously cited authors who defined the codification approach also came up with their own definition of this approach: the community networking model (Swan, et al., 1999); the connecting dimension (Denning, 1998); the process-centered approach (Know-Net, 2000); the independent model (Natarajan & Shekhar, 2000); the collaborative approach (Zack & Michael, 1998); and socio-organizational knowledge management (Wick, 2000).
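As a simple illustration of how a survey could separate the two approaches, the sketch below averages usage ratings of codification-oriented and personalization-oriented practices and labels the dominant strategy. The item groupings, ratings and decision rule are hypothetical and are not the instrument used in this study.

    # Hypothetical usage ratings (1-5) for two groups of KM practices
    codification_usage = [5, 4, 5, 3]       # e.g., document repositories, databases
    personalization_usage = [3, 4, 2, 3]    # e.g., communities of practice, mentoring

    cod = sum(codification_usage) / len(codification_usage)
    pers = sum(personalization_usage) / len(personalization_usage)

    if cod > pers:
        dominant_strategy = "codification"
    elif pers > cod:
        dominant_strategy = "personalization"
    else:
        dominant_strategy = "mixed"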
KM Initiative Success
It is always difficult, and open to controversy, to define and measure "success". Different metrics (qualitative and quantitative) can be used to measure success. For example, Jennex and Olfman (2004) offer a success model based upon the DeLone and McLean (1992) IS Success Model and discuss four different models of KM success: (1) the Knowledge Value Chain (Bots & Bruiin, 2002); (2) the KM Success Model (2002); (3) the KM Effectiveness Model (2002); and (4) the KMS Success Model (2003). Four main indicators defined and used by Davenport et al. in their publication concerning "successful knowledge management projects" were adopted (Davenport, De Long, & Beers, 1998):
1. Growth in the volume of knowledge available since the KM initiative was launched (e.g., number of documents available)
2. Growth in the usage of the knowledge available since the KM initiative was launched (accesses to repositories, or the number of participants for discussion-oriented projects)
3. The likelihood that the project would survive without the support of a particular individual or two, that is, the project is an organizational initiative, not an individual project
4. Growth in the resources (e.g., people, money) attached to KM initiatives
Success was measured along two dimensions. Since the main purpose of a KMS is to facilitate the flow and dissemination of knowledge, an important dimension of success is the fact that different employees use the system. Success factors #1 and #2 were used to measure this dimension of success. The second dimension of success is based on the "robustness" of the KM initiative: if KM is given the resources, and if there is a clear commitment from senior management to make it happen, then the initiative is robust. Success factors #3 and #4 were used to measure this second dimension of success. We believed that it would also be relevant to check whether the expected benefits of the KM initiative were achieved and, if so, to what degree. To do so, we used a questionnaire developed by KPMG (2000). Fifteen main benefits often expected after KM implementation were used (KPMG, 2000). Additional success factors could have been used, such as the 12 KMS success factors presented by Jennex and Olfman (2004), but it was easier to work with a smaller number of core variables.
The average of all the success factors was used to obtain the success level score.
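A minimal sketch of that scoring step follows; the individual ratings are made up, and treating all items as equally weighted is simply a literal reading of "the average of all the success factors".

    # Illustrative ratings (1-5) for one organization
    usage_dimension = [4, 5]             # success factors #1 and #2
    robustness_dimension = [3, 4]        # success factors #3 and #4
    expected_benefits = [4, 3, 5, 4, 2]  # subset of the 15 KPMG benefit items

    all_factors = usage_dimension + robustness_dimension + expected_benefits
    success_level = sum(all_factors) / len(all_factors)   # 3.78 for this example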
RESEARCH HYPOTHESES AND PROPOSED CONCEPTUAL MODEL
Research Hypothesis #1
As previously presented, organizational trust seems to be an important cultural factor influencing interaction and knowledge sharing between individuals. Nelson and Cooprider (1996) demonstrated a significant relationship between mutual trust and shared knowledge between IS groups and their line customers. Politis (2003) also used a quantitative approach to demonstrate the relationship between trust and knowledge acquisition; his findings support that most interpersonal trust dimensions are positively related to knowledge acquisition. Apart from these two studies, very few have been conducted to demonstrate the direct relationship between trust and knowledge sharing. A lot of research focuses on demonstrating the relationship between knowledge sharing and variables such as personal motivation, social capital, and communication, particularly in work on virtual teams and communities of practice (Teoh & Avvari, 2004). All these studies reinforce the importance of trust in individual interactions (face to face or assisted by technology). KM personalization approaches are based on practices and tools that support direct relations between individuals. If the level of trust between employees is high, we can expect more direct communication and more knowledge sharing. Our first hypothesis is based on this assumption:
H1: The level of organizational trust positively influences the level of usage of KM personalization tools and practices.
Research Hypothesis #2

What is the relationship between organizational trust and the usage of codification tools? We are now focusing on a human-technology relationship: the knowledge used has been codified and is available in an information system. The question becomes, will someone who does not trust his/her colleagues still use the knowledge they have codified in the system? In fact this problem has two facets: trust in the system and trust in its content. We can assume that if people do not trust the system they are not going to use it, so they will not be able to get and use the knowledge available in it. This type of question belongs to the field of technology adoption, and among the most widely used models we can mention the TAM model originally developed by Davis (1989). The trust variable was not originally part of the TAM model, but the numerous evolutions of the model, as well as its customization to e-commerce applications, made trust appear as an important additional component (Bahmanziari, Pearson, & Crosby, 2003; D. H. McKnight, Choudhury, & Kacmar, 2002; H. D. McKnight & Chervany, 2000). Bock, Sabherwal and Qian (2008) developed and tested a model of knowledge repository success (KRS) including perceived KRS searchability, perceived KRS output quality, perceived usefulness and user satisfaction. They examined how three aspects of social context (extrinsic rewards, intrinsic rewards, and organizational trust) affected the dimensions of KRS success. The model was tested on KM systems following a codification strategy. Their findings suggest 1) developing organizational trust and 2) facilitating intrinsic rewards for knowledge contribution, partly through organizational trust. Now, if we assume that a person does trust the system but does not trust the people who populated its content with knowledge artifacts, what can happen?

• I don't trust this person, so I am not going to contact him/her directly to get their knowledge, but I have no problem accessing the knowledge they shared in the system. The key is to acquire knowledge no matter how it was obtained.
• I don't trust this person and I will not even trust what this person shared on the system.
These two scenarios reflect the two types of trust previously described: cognitive and affective (McAllister, 1995). In the first scenario there is no affective trust between the two individuals, but there is some cognitive trust. In the second scenario both types of trust are lost and knowledge acquisition will not occur. Based on the preceding discussion we postulated the following hypothesis:

H2: The level of organizational trust positively influences the level of usage of KM codification tools and practices.

We think that the level of organizational trust does influence the usage of KM codification tools, but we are also conscious that other dimensions present in the TAM model will play a role in this relationship. Consequently, we expect the relationship between trust and codification to be moderate (not too strong).
Research Hypothesis #3

Early in the 1990s, Jack Welch had already underlined the important role of trust: "Trust is enormously powerful in a corporation. People won't do their best unless they believe they'll be treated fairly--that there's no cronyism and everybody has a real shot. The only way I know to create that kind of trust is by laying out your values and then walking the talk. You've got to do what you say you'll do, consistently and over time" (Welch, 1993).
The early KM efforts conducted by Buckman Laboratories were crowned with success, and once again trust was mentioned as a critical component: "It is important to create a climate of continuity and trust so that we may have proactive knowledge sharing across time and space. Organizational culture must change from a state of hoarding knowledge to gain power to one of sharing knowledge to gain power" (as quoted in Davenport and Prusak, 1998). When the level of organizational trust is high, people are more open to interacting, collaborating, innovating, taking risks, and, of course, sharing and acquiring knowledge. This leads us to postulate the following hypothesis:

H3: The level of organizational trust positively influences the success level of a KM initiative.
Research Hypothesis #4

The personalization approach is intended to facilitate interaction and collaboration between individuals so that they can share their tacit knowledge, solve problems more rapidly, make better decisions faster, grow intellectually, and be more creative. Very few studies have been conducted to assess the relationship between personalization approaches and the success of KM initiatives. Among them we can mention the research conducted by Delmonte and Aronson (2004), who demonstrated a significant relationship between social interaction and knowledge management system success; the trust factor is mentioned in this study as being critical. Another study, conducted by Choi and Lee (2002), examined four KM styles and their effect on corporate performance (based on benchmarking). Their results show that companies adopting a "dynamic style" (highly tacit and explicit oriented) are the most successful, while companies that are mainly "system-oriented" (focused on explicit knowledge) or "human-oriented" obtain similar scores, lower than those of the "dynamic" style. Based on these findings we postulated the following hypothesis:

H4: The level of usage of KM personalization tools and practices positively influences the success level of a KM initiative.
Research Hypothesis #5

Based on Choi and Lee's (2002) study described above, it appears that both approaches (codification and personalization) have a positive effect on the success of a KM initiative. Not everyone agrees with this idea. McDermott (1999), for instance, clearly stated in a provocative paper titled "Why information technology inspired but cannot deliver knowledge management" that ICT can only carry the information that will be used for individual or group thinking, which becomes the source of knowledge; to leverage knowledge, thinking must be leveraged with appropriate information. For McDermott the solution resides in Communities of Practice (CoP), but he does not deny the enabling effect of ICT in KM. We can hardly think about KM these days without the use of technology, but, as often mentioned, its role needs to remain that of an enabler and not the center of a KM strategy. Our fifth research hypothesis is:

H5: The level of usage of KM codification tools and practices positively influences the success level of a KM initiative.
Research Hypothesis #6

Lee and Choi (2003) studied the relationships between knowledge management enablers, processes and organizational performance. Their study, conducted among 63 major Korean companies, demonstrated significant relationships along the chain KM enablers → knowledge creation processes → organizational creativity → organizational performance. Organizational performance was measured based on an adaptation of the balanced scorecard, where the company compares itself to its competitors using five factors. Wu (2008) conducted a longitudinal examination of 36 companies that won the MAKE award (Most Admired Knowledge Enterprises) to assess the relationship between KM performance and firm performance in terms of accounting and market measures; his findings show that KM performance is a predictor of superior bottom-line performance. Anantatmula (2007) conducted a survey to link KM effectiveness attributes to organizational performance, and all the selected key attributes (similar to ours) were confirmed to have an effect on improving organizational performance. An extensive literature exists on this topic: Chen and Chen (2005) conducted a review of survey research in knowledge management performance measurement between 1995 and 2004 and grouped the studies into eight categories (qualitative analysis, quantitative analysis, financial indicator analysis, non-financial indicator analysis, internal performance analysis, external performance analysis, project-oriented analysis, and organizational-oriented analysis), together with their measurement matrices for different research and problem domains. Following this classification, we could state that our organizational benefits assessment tool fits into the internal performance analysis category. Other KM performance classifications can be used; for example Dudezert (2006), based on an extensive literature review, defined two categories: a macro-organizational approach (composed of the competitive performance of KM and the financial performance of KM) and a micro-organizational approach to KM evaluation (composed of a process-based approach and a systemic approach to the performance of KM). In our research we consider KM to be a process used to identify, capture, store, share and transfer knowledge in an organization in order to support its core business processes, in alignment with its business strategy. Since KM is about improving business processes by better managing the knowledge flows around them, its resulting impact should be directly visible at the organizational level. Its impact might be more or less visible depending on the effectiveness of the KM initiative, but it should be present to some extent. This leads us to postulate the following hypothesis:

H6: The success level of a KM initiative positively influences organizational benefits.
Research Hypotheses #7 and #8

Keskin (2005) conducted a study among 128 Turkish SMEs and found that the codification approach had a direct impact on firm performance; based on his findings, the impact of the codification approach on performance was greater than that of the personalization approach. Schulz and Jobe (2001) conducted a study among 98 subsidiaries of multinational corporations based in the US and in Denmark and found that companies that used a focused approach to codification or personalization obtained a positive effect on performance. Schulz argues that a focused codification approach will have a stronger impact on performance than a focused personalization approach. A focused approach is defined as a KM strategy that regulates knowledge flows by controlling the degree to which knowledge is encoded in forms that match the information intensity and ambiguity of that knowledge (Schulz & Jobe, 2001). Zack (1999) also argues that the nature of the benefits gained from managing explicit knowledge depends on the type of application. Based on these findings we postulated the following two hypotheses:

H7: The level of usage of KM personalization technologies and practices positively influences organizational benefits.
H8: The level of usage of KM codification technologies and practices positively influences organizational benefits.
Research Hypothesis #9

Assessing the impact of trust on organizational performance is a difficult task, and very little research has been conducted to validate this relationship. Among the few existing studies we can mention the work of Sako (2006), who argues that performance factors can be classified into three categories: reducing transaction costs, investments with future returns, and continuous improvement and learning. She used a sample of 1,415 responses from first-tier component suppliers in the automotive industry in Japan, the USA, and Europe and asked respondents to evaluate how much trust they could place in their customers. Three types of trust were used to validate their relation with business performance: goodwill trust, contractual trust and competence trust. Goodwill trust was estimated to have the strongest influence on business performance. Tan and Lim (2009) demonstrated that the positive relation between trust in coworkers and performance is fully mediated by trust in their organization. De Furia (1997) argues that the benefits of high trust include stimulating innovation, leading to greater emotional stability, facilitating acceptance and openness of expression, and encouraging risk taking. Therefore, we proposed:

H9: The level of organizational trust positively influences organizational benefits.
Research Model

The nine hypotheses presented above served as the foundation of the following model (Figure 1).
Figure 1. Research model
Research Methodology

Assessment of Variables

A survey tool (a questionnaire) was developed in order to assess:

• The level of organizational trust
• The level of use of different KM tools and technologies deployed in each organization
• The perceived success of the KM initiative
• Organizational benefits
Assessing Organizational Trust

The selected tool, the Organizational Trust Survey (OTS), was developed and validated by De Furia (1996, 1997). In this instrument, trustworthiness (TW) is based on five behaviors:

TW = SI + RC + AI + CE + ME

Sharing relevant information (SI) refers to the behaviors whereby one individual transmits information to another person. Reducing controls (RC) refers to the behaviors affecting the processes, procedures or activities with which one individual (1) establishes the performance criteria or rules for others, (2) monitors the performance of another person, (3) adjusts the conditions under which performance is achieved, or (4) adjusts the consequences of performance (i.e., positive or negative reinforcements). Allowing for mutual influences (AI) occurs when one person makes a decision that affects both individuals; mutual influence means that both individuals have approximately equal numbers of occurrences of convincing the other or making the decision for both. Clarifying mutual expectations (CE) refers to those behaviors wherein one person clarifies what is expected of both parties in the relationship; it involves sharing information about mutual performance expectations. Meeting expectations (ME) involves any behaviors in which one individual fulfills the behavioral expectations of another person; it is closely related to confidence, reliability and predictability.

The OTS allows organizations to measure the trust-related behaviors of various categories of people within the organization (upper managers, first-line supervisors, and coworkers) in relation to how employees' trust-related expectations are being met. It also measures trust-related behaviors between organizational units and the perceived impacts of organizational policies and values on trust-related behaviors. The tool is based on 50 questions (10 questions for each of the 5 factors). We used this tool because it measures different dimensions of trust at different levels of an organization, because it has the advantage of being easy to administer with a limited number of questions, and because it had been previously tested and validated.
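As an illustration of how OTS responses could be aggregated, the sketch below averages the ten items of each of the five behavioral sub-scales and combines them following TW = SI + RC + AI + CE + ME. The response values are invented, and the exact scoring key of the published instrument may differ.

from statistics import mean

# Hypothetical responses: 10 Likert items per OTS sub-scale (50 items in total).
responses = {
    "SI": [4, 3, 4, 5, 4, 3, 4, 4, 5, 3],  # Sharing relevant information
    "RC": [3, 3, 4, 2, 3, 4, 3, 3, 2, 4],  # Reducing controls
    "AI": [4, 4, 3, 4, 5, 4, 3, 4, 4, 4],  # Allowing for mutual influences
    "CE": [5, 4, 4, 5, 4, 4, 5, 4, 4, 5],  # Clarifying mutual expectations
    "ME": [4, 4, 4, 3, 4, 5, 4, 4, 3, 4],  # Meeting expectations
}

# Score each behavior as the mean of its ten items, then sum the five sub-scale scores.
subscale_scores = {behavior: mean(items) for behavior, items in responses.items()}
trustworthiness = sum(subscale_scores.values())

print(subscale_scores)
print(f"TW = {trustworthiness:.2f}")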
Assessing the Use of KM Tools and Technologies

For this section of the questionnaire an assessment tool was developed. The most common tools and technologies used for knowledge management initiatives were listed, based on a literature review; these technologies cover the six categories of the knowledge management spectrum presented by Binney (2001). Respondents were asked to list the KM tools and technologies used at the organizational level (cf. Table 2). A sense of the degree of use, ranging from "most used" to "least used", was employed to enrich this insight. It might be argued that some of the personalization tools, e.g., corporate yellow pages, are in fact examples of codified knowledge; the critical delineator is how the tools are used in practice. For example, the crucial fact about corporate yellow pages is not that they constitute a knowledge repository, but that employees use them to connect to experts. At the time of the data collection social networking tools were not yet popular, but future research should include them. Their classification in the codification/personalization scheme might be difficult since they fit in both categories, even though their initial intent is to network (socialization).
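One simple way to turn the reported tool usage into the codification and personalization usage constructs is sketched below. The tool lists are abridged from Table 2, and the rating scheme (1 = least used to 5 = most used) is an assumption made for illustration only.

from statistics import mean

# Abridged tool lists (see Table 2); identifiers are hypothetical.
CODIFICATION = {"email_listserv", "intranet", "dbms", "search_engines", "data_warehouse", "dms", "data_mining"}
PERSONALIZATION = {"yellow_pages", "communities_of_practice", "groupware", "videoconferencing", "mentoring", "storytelling", "instant_messaging"}

# Hypothetical ratings of degree of use (1 = least used ... 5 = most used);
# tools the respondent did not report are simply absent.
usage = {"email_listserv": 5, "intranet": 4, "dms": 2, "yellow_pages": 3, "communities_of_practice": 4, "mentoring": 1}

def construct_score(tools, ratings):
    """Average usage over the tools of one category that the respondent reported."""
    reported = [ratings[tool] for tool in tools if tool in ratings]
    return mean(reported) if reported else 0.0

codification_usage = construct_score(CODIFICATION, usage)
personalization_usage = construct_score(PERSONALIZATION, usage)
print(f"Codification usage: {codification_usage:.2f}")       # 3.67
print(f"Personalization usage: {personalization_usage:.2f}")  # 2.67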
KM Initiative's Success and Organizational Benefits

Four items were used to assess the level of KM success and 15 items were used to assess the organizational expected and achieved benefits. Respondents were asked to assess on a five-point Likert scale to what degree they believed that the following statements corresponded to the current success status of their organization's KM initiative:

• I have noticed a significant growth in the volume of knowledge available since the KM initiative has been launched (number of documents available).
• I have noticed a significant growth in the usage of the knowledge available since the KM initiative has been launched (accesses to repositories and number of participants for discussion-oriented projects).
• I believe that the project would survive without the support of a particular individual or two.
• I believe that the resources (e.g., people, money) attached to KM initiatives are going to grow.

Regarding the 15 KM benefits expected and achieved (as shown in Table 1), the respondents were asked to assess on a five-point Likert scale to what degree they believed that each benefit was achieved (only if it was expected).
Table 1. Fifteen common KM benefits

Better decision making
Sharing best practice
Better customer handling
Reduced costs
Faster response to key business issues
New ways of working
Improved employee skills
Increased market share
Improved productivity
Create additional business opportunities
Increased profits
Improved new product development
Increased innovation
Staff attraction / retention
Increased share price

Table 3. Results of Cronbach alpha test

Construct (number of items remaining): α
Organizational Trust (24): 0.94
Codification (7): 0.801
Personalization (7): 0.827
KM Success (4): 0.708
Organizational benefits (15): Not applicable
Table 2. Codification and Personalization KM Tools and Practices

Codification tools and technologies: Email & Listserv; Corporate Intranet – Extranet – Internet; Database Management Systems; Search Engines - Intelligent Agents; Data Warehouses – Data Marts; Web-based training – e-learning; Help-desk applications; DMS; Multimedia repositories; DSS and Expert Systems; Data mining - Knowledge Discovery; Knowledge Mapping

Personalization tools and practices: Expertise locators – Corporate Yellow pages – Who's who; Communities of Practice (interests in the same topic, field); Communities of Purpose (project, task oriented); Groupware; Teleconferencing (shared applications, whiteboards); Best practices repository; Videoconferencing (using audio and/or video); Mentoring - Tutoring; Story Telling; Desktop computer conferencing; Online Chat & Instant Messaging
Validity and Reliability of the Survey Instrument

Due to the space limitations of this publication, we only provide a summarized version of the results of the different tests that were conducted to verify the validity and reliability of our instrument (Ribière, 2005). In order to test the internal consistency of the different dimensions assessed, we performed a Cronbach alpha test (Table 3). The results demonstrate an acceptable level of internal consistency. Some items were removed from the instrument due to their low correlation with the other items composing the construct. A factor analysis was then conducted to test the validity of each construct. For the codification and personalization constructs, some items had to be removed due to their low loadings on the factors; for the other constructs, all items were retained. Overall, we consider that the levels of validity and reliability of the assessment tool were acceptable.
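For readers who wish to reproduce the reliability check, the sketch below computes Cronbach's alpha for one construct from a respondents-by-items matrix using the standard formula; the data matrix is invented for illustration.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    k = items.shape[1]                              # number of items in the construct
    item_variances = items.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses of six people to a four-item construct (e.g., KM success).
data = np.array([
    [4, 4, 5, 4],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [2, 3, 2, 3],
    [4, 5, 4, 4],
    [3, 3, 3, 2],
])

print(f"alpha = {cronbach_alpha(data):.2f}")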
Data Collection and Analysis

Data were collected through two main mechanisms: an online version of the questionnaire posted on the Web and a paper version. Most of the responses received (98%) came from the online version. The target population consisted of Chief Knowledge Officers (CKOs), managers, and other employees involved in knowledge management initiatives at any level of an organization. A total of 1,050 e-mails asking for participation were sent out to targeted people involved with KM (members of KM groups and associations), and 129 responses were received, which represents a response rate of 12%. A fundamental premise of the research was that the targeted organizations must have had experience with KM initiatives. Of the 129 questionnaires received, only 97 were complete and representative of organizations involved in KM. The organizations that participated were predominantly (68%) large organizations (>1,000 employees) and were in the fields of ICT-telecommunications (27%), consulting (23%), and agencies of the US Federal Government (23%). Respondents' organizations were mainly (61%) service-oriented, offering both standardized and customized products/services (64%). A large portion of the respondents held an executive/managing/director position (59%).
Model Validation

A path analysis using structural equation modeling techniques was performed to test our model. The test was performed using the "CALIS" procedure of the statistical software SAS, which estimates parameters by maximum likelihood. The path diagram is presented in Figure 2 and the goodness-of-fit indexes are presented in Table 4. The Chi-square value listed in this table corresponds to the null hypothesis test that the covariance matrix generated from the data collected has the same structure as our theoretical model, meaning that the model fits our data.

Figure 2. Path analysis diagram
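The chapter's analysis was run in SAS; as a rough open-source stand-in (an assumption on our part, not the authors' toolchain), the sketch below specifies the nine hypothesized paths and fits them by maximum likelihood with the Python semopy package. The data file and column names are hypothetical.

import pandas as pd
import semopy

# H1: Trust -> Personalization; H2: Trust -> Codification;
# H3-H5: Trust, Personalization and Codification -> KM success;
# H6-H9: KM success, Personalization, Codification and Trust -> Benefits.
MODEL_DESC = """
Personalization ~ Trust
Codification ~ Trust
KMSuccess ~ Trust + Personalization + Codification
Benefits ~ KMSuccess + Personalization + Codification + Trust
"""

# One row per respondent with the composite construct scores.
df = pd.read_csv("km_trust_survey.csv")  # hypothetical file name

model = semopy.Model(MODEL_DESC)
model.fit(df)                      # maximum-likelihood estimation (the default objective)
print(model.inspect())             # path coefficients, standard errors and p-values
print(semopy.calc_stats(model))    # chi-square, CFI, NFI and other fit indexes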
Table 4. Goodness-of-fit indexes for final model (each index is followed by the value obtained and, in parentheses, the criterion for a "good" model)

Chi2 = 11.70 (as small as possible)
Degrees of freedom = 6
Prob > Chi2 = 0.07 (as high as possible, > .05)
Comparative Fit Index (CFI; Bentler) = 0.96 (greater than 0.90)
McDonald's Measure of Centrality = 0.96 (greater than 0.90)
Non-Normed Fit Index (NNFI; Bentler & Bonett) = 0.91 (greater than 0.90)
Normed Fit Index (NFI; Bentler & Bonett) = 0.93 (greater than 0.90)
Other indicators of fit are presented in the same table. The model can be considered acceptable based on the fit index values required for a "good" model (Hatcher, 1994).
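For reference, the Bentler and Bonett type indexes reported in Table 4 are conventionally computed from the chi-square statistics of the fitted model and of the independence (null) model; the formulas below are the standard textbook definitions rather than expressions given in the chapter.

\mathrm{NFI} = \frac{\chi^2_{\mathrm{null}} - \chi^2_{\mathrm{model}}}{\chi^2_{\mathrm{null}}}
\qquad
\mathrm{CFI} = 1 - \frac{\max\left(\chi^2_{\mathrm{model}} - df_{\mathrm{model}},\ 0\right)}{\max\left(\chi^2_{\mathrm{model}} - df_{\mathrm{model}},\ \chi^2_{\mathrm{null}} - df_{\mathrm{null}},\ 0\right)}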
Main Findings

Most of the coefficients in the model are highly significant. Among the most significant we can mention the coefficient between the "success of the KM approach" and "organizational benefits" (0.72), with a high prediction level (R² = 0.69). This finding demonstrates the positive impact that a KM initiative can have on an organization in terms of reaching its business objectives, and it reinforces the fact that a KM strategy should be closely aligned with the business strategy of an organization to bring the most value. The level of organizational trust impacts the use of personalization and codification approaches almost equally (H1 and H2). As explained in the definition of the research hypotheses, we originally expected the influence of trust to be higher on personalization than on codification, but it seems that organizational trust impacts both almost equally. Nevertheless, the trust factor seems to be a better predictor of personalization usage (R² = 0.23) than of codification usage (R² = 0.16). Trust thus becomes a critical cultural element for organizations that want to engage in any type of KM initiative. This is also reinforced by the direct, significant relationship between the level of organizational trust and the success level of the KM initiative: it shows that even if technology is not used, or only moderately used, trust will contribute to the success level of the KM initiative and will indirectly benefit the organization as a whole. As previously stated, trust facilitates the relationships between people, their social interaction and their predisposition to share knowledge. Other factors (not included in this model) will also affect the usage of KM technologies. For instance, a framework labeled the Requirements of Acceptance Model (RAM) was formulated by Ericsson and Avdic (2003); in their model the acceptance of knowledge management systems is a function of perceived relevance, system accessibility and management support. The model of Bock, Sabherwal and Qian (2008) described previously is also a source of valuable findings.
The usage level of personalization tools has a statistically significant impact (0.18) on the organizational benefits of a company (H7). This relationship was not significant between the usage level of codification tools and the organizational benefits variable (H8). Having employees interact, collaborate and share seems to provide more benefits to a company than having people simply use an IT system to codify and acquire knowledge. These findings are aligned with McDermott's (1999) vision, expressed in a provocative paper titled "Why information technology inspired but cannot deliver knowledge management", that ICT can only carry the information that will be used for individual or group thinking, which becomes the source of knowledge; to leverage knowledge, thinking must be leveraged with appropriate information. For McDermott the solution resides in Communities of Practice (CoP), but he does not deny the enabling effect of ICT in KM. We can hardly think about KM these days without the use of technology, but, as often mentioned, its role needs to remain that of an enabler and not the center of a KM strategy. These findings are also aligned with the study conducted by Bayyavarapu (2005), in which 80 Canadian organizations were used to assess the impact of KM strategies on firm performance. He defined three main KM strategies (an IT-centered strategy, a capture-based strategy and a learning KM strategy) and used two types of performance, short term and long term. Bayyavarapu argues that an IT-centered KM strategy in isolation yields neither short-term nor long-term performance benefits, that capture-based KM strategies yield short-term performance, and that learning-based KM strategies yield long-term performance. These three strategies are complementary and yield better performance benefits when used simultaneously.

To our surprise, the level of organizational trust did not have a significant impact on organizational benefits (H9). The effect of organizational trust in our model might be affecting organizational benefits indirectly, through the different KM variables composing our model; the concept of a mediating variable was not tested (an illustrative computation of such an indirect effect is given at the end of this section). This finding is aligned with the research conducted by Zaheer, McEvily and Perrone (1998), which showed that interpersonal trust did not have a significant direct impact on performance.

The control variable "rewarding knowledge sharing" was, surprisingly, only significant when applied to the personalization construct and not to the codification construct. Most companies currently reward people to codify their knowledge and/or to get it from the knowledge repository. Employees usually do not like to document things, and the quality of the resulting codified knowledge is often low because of that. Rewarding people to socialize and to share their knowledge through person-to-person interactions (personalization) seems to have a greater impact and might be "more fun" and rewarding (Earl, 2001). Mentoring, for instance, is a great way to transfer tacit knowledge to junior employees (Swap, Leonard, Shields, & Abrams, 2001); it requires experts and/or senior employees to dedicate a large amount of time to explaining their acts, decisions, behaviors and approaches to junior employees during daily activities, and appropriate rewards should be given in exchange for such service. The recent emphasis on Web 2.0 (social networking) tools seems to validate this trend of encouraging people to "connect".

The usage level of personalization (H4) and codification tools (H5) both have a significant impact on the success level of a KM initiative. The impact of the personalization tool usage factor (0.28) is slightly higher than that of codification (0.24), but not different enough to draw any conclusion.
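To illustrate what such a mediated (indirect) effect would look like with the coefficients reported above, the usual product-of-coefficients convention gives, for the route from personalization usage through KM success to organizational benefits (an illustrative computation on our part, not a test performed in the study):

\text{indirect effect}_{\text{Personalization}\,\rightarrow\,\text{KM success}\,\rightarrow\,\text{Benefits}} \approx 0.28 \times 0.72 \approx 0.20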
Future Trends

It is clear that more research needs to be conducted in order to fully understand the impact of the trust factor on the use of KM practices and tools. This research used a quantitative approach, and we suggest complementing it with a qualitative approach to better understand the motivations behind trusting and non-trusting behaviors. This research was conducted only with US companies, and it would be valuable to test such a model in other countries to assess the impact of national culture/traits on the willingness to trust and to share knowledge. The new, strong emphasis on social networking tools might be a new way to start building trust between individuals; research in this direction might also prove fruitful.
Conclusion

Very few quantitative studies have been conducted to demonstrate and quantify the influence of organizational trust on the usage level of various KM approaches, as well as on the success of a KM initiative and on the resulting benefits for organizations. This initial study is a first attempt to do so. The theoretical model presented has an acceptable fit with the data collected, but it would greatly benefit from further validation with larger data sets and with more diversity in terms of the industries represented. The preliminary theoretical and practical findings of this research show that organizational trust plays an important role in the success of KM initiatives and in the usage level of personalization and codification technologies (which is not always obvious for the latter). The level of KM initiative success was demonstrated to have a strong and direct impact on organizational benefits. Organizations with a high level of trust were more likely to be successful in their KM initiatives, and the choice of a dominant KM strategy (codification, personalization, or balanced) that led to success seemed to follow a contingency approach. In terms of technology usage, it appears that simple tools such as e-mail, intranet applications and database management systems remain the most used codification tools, and that expertise locators and communities of practice and interest remain the most used personalization tools. One has to be very cautious about this last finding since, as Alavi, Kayworth and Leidner (2005) mentioned, one cannot expect uniformity in how groups will use KM tools, since their respective cultural values might influence their choices and needs. One of the practical implications of our preliminary findings is that companies should assess their level of trust at the organizational level and at the unit level in order to better define a successful KM strategy (or strategies), since, for instance, the adoption of socialization tools is not likely to be high if the level of trust is low. This study could not fully demonstrate the strong value that personalization tools and practices could bring to the success of a KM initiative and to the resulting benefits for companies, but we believe that their impact might be significant if the organizational culture embraces knowledge sharing behaviors. Not all organizations have yet realized the beneficial influence that trust could bring to their environment and the impact it could have on facilitating knowledge sharing, knowledge re-use and the creation of new knowledge. When present, trust is part of the social capital of an organization, even though, in some very particular circumstances, this statement might not be validated (Bakker, et al., 2006). A culture and/or leadership change will often be required for organizations to increase their level of trust. Williams (2004) provides a list of factors on how to build or repair trust: integrity, reliability, fairness, caring, openness, competence, loyalty, investing in employees, promoting open communication, behaving in an ethical and socially responsible manner, and providing job security. Other authors, like Schoorman, Mayer and Davis (2007), summarize these various factors in three main dimensions (ability, benevolence and integrity), and Blomqvist and Ståhle (2000) group them in terms of competence, goodwill and behavior. Finally, Galford and Drapeau (2003) provide a good set of simple practices that can help to fight the enemies of trust.
References

Alavi, M., Kayworth, T. R., & Leidner, D. E. (2005). An Empirical Examination of the Influence of Organizational Culture on Knowledge Management Practices. Journal of Management Information Systems, 22(3), 191–224. doi:10.2753/MIS0742-1222220307

Alavi, M., & Leidner, D. E. (2001). Review: Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues. MIS Quarterly, 25, 107–136. doi:10.2307/3250961

Anantatmula, V. S. P. (2007). Linking KM effectiveness attributes to organizational performance. VINE: The Journal of Information and Knowledge Management Systems, 37(2), 133–149.

Bahmanziari, T., Pearson, J. M., & Crosby, L. (2003). Is trust important in technology adoption? A policy capturing approach. Journal of Computer Information Systems, 43(4), 46–54.

Bakker, M., Leenders, R. T. A. J., Gabbay, S. M., Kratzer, J., & Van Engelen, J. M. L. (2006). Is trust really social capital? Knowledge sharing in product development projects. The Learning Organization, 13(6), 594–605. doi:10.1108/09696470610705479

Barth, S. (2000, October). KM Horror Stories. Knowledge Management Magazine, 3, 37–40.

Bayyavarapu, H. B. (2005). Knowledge management strategies and firm performance. London, Ontario: The University of Western Ontario.
Becerra-Fernandez, I., & Sabherwal, R. (2001). Organizational Knowledge Management: A contingency perspective. Journal of Management Information Systems, 18(1), 23–55. Binney, D. (2001). The knowledge management spectrum - understanding the KM landscape. Journal of Knowledge Management, 5(1), 33–42. doi:10.1108/13673270110384383 Blomqvist, K., & Ståhle, P. (2000). Building organizational trust. Paper presented at the 16th IMP-conference. from http://www.impgroup.org/ paper_view.php?viewPaper=37 Bock, G.-W., Sahbherwal, R., & Qian, Z. (2008). The effect of social context on the success of knowledge repository systems. IEEE Transactions on Engineering Management, 55(4), 536–551. doi:10.1109/TEM.2008.927824 Bots, P. W. G., & Bruiin, h. (2002). Effective Knowledge Management in Professional Organizations: Going by the rules. Paper presented at the 35th Hawaii International Conference on System Sciences. Carnevale, D. G., & Wechsler, B. (1992). Trust in the public sector. Administration & Society, 23, 471–494. doi:10.1177/009539979202300404 Chen, A.-P., & Chen, M.-Y. (2005). A review of survey research in knowledge management performance measurement: 1995-2004. Paper presented at the I-KNOW 05. Choi, B., & Lee, H. (2002). An empirical investigation of KM styles and their effect on corporate performance. Information & Management, 40(5), 403–417. doi:10.1016/S0378-7206(02)00060-5 Cohen, D., & Prusak, L. (2001). Good Company. How Social Capital Makes Organizations Work. Harvard Business School Press. KPMG Consulting (2000). Knowledge Management Research Report.
Cook, J., & Wall, T. (1980). New work attitude measures of trust, organizational commitment and personal need non-fulfillment. Journal of Occupational Psychology, 53, 39–52.
Denning, S. (1998). What is knowledge management? from http://www.stevedenning.com/ Find_what_is_km.html
Culbert, S. A., & McDonough, J. J. (1986). The politics of trust and organizational empowerment. Public Administration Quaterly, 10, 171–188.
Dennis, A. R., & Vessey, I. (2005). Three knowledge management strategies: knowledge hierarchies, knowledge markets, and knowledge communities. MIS Quarterly Executive, 4(4), 399–412.
Davenport, T., De Long, D. W., & Beers, M. C. (1998). Successful Knowledge Management Projects. Sloan Management Review, 39(2), 43–57.
Dinur, A. (2002). Intrafirm knowledge transfers in multinational corporations: Considering critical context. Temple University.
Davenport, T., & Prusak, L. (1998). Working Knowledge. How organizations manage what they know. Harvard Business School Press.
Dudezert, A. (2006). Approaches and Methods for Valuing Knowledge Management Performance. In Boughzala, I., & Ermine, J.-L. (Eds.), Trends In Enterprise Knowledge Management. International Scientific and Technical Encyclopedia. doi:10.1002/9780470612132.ch6
Davenport, T. H., & Grover, V. (2001). Special issue: Knowledge management. Journal of Management Information Systems, 18(1), 3–4. Davis, F. D. (1989). Perceived Usefulness, Perceived Ease Of Use, And User Acceptance of Information Technology. MIS Quaterly, 13(3), 319–341. doi:10.2307/249008 De Furia, G. L. (1996). A Behavioral Model of Interpersonal Trust. Unpublished Doctoral dissertation, St. John’s University, Springfield, LA. De Furia, G. L. (1997). Facilitator’s guide to the interpersonal trust surveys. Pfeiffer & Co. De Long, D. W., & Fahey, L. (2000). Diagnosing cultural barriers to knowledge management. The Academy of Management Executive, 14(4), 113–127. Delmonte, A. J., & Aronson, J. E. (2004). The Relationship Between Social Interaction And Knowledge Management System Success. Journal of Knowledge Management Practice, 5. Delone, W. H., & McLean, E. R. (1992). Information Systems Success: The Quest for the Dependent Variable. Information Systems Research, 3, 60–95. doi:10.1287/isre.3.1.60
Earl, M. (2001). Knowledge Management Strategies: Toward a Taxonomy. Journal of Management Information Systems, 18(1), 215–233. Ericsson, F., & Avdic, A. (2003). Knowledge Management Systems Acceptance. In Coakes, E. (Ed.), Knowledge Management: Current Issues and Challenges (pp. 39–51). Hershey, PA: IRM Press. Fahey, L., & Prusak, L. (1997). The eleven deadliest sins of Knowledge Management. California Management Review, 40(3), 265–276. Focus, H. R. (2007). Why culture can mean life or death for your organization. HRFocus, 84, 9. Galford, R., & Drapeau, A. S. (2003). The enemies of trust. Harvard Business Review. Gold, A. H., Malhortra, A., & Segars, A. H. (2001). Knowledge Management: An Organizational Capabilities Perspective. Journal of Management Information Systems, 18(1), 185–214. Grant, R. M. (1991). The resource-based theory of competitive advantage: Implication for strategy. California Management Review, 22, 114–135.
Griffin, K. (1967). The contribution of studies of source credibility to a theory of interpersonal trust in the communication process. Psychological Bulletin, 68, 104–120. doi:10.1037/h0024833
King, W. R. (2007). A Research Agenda for the Relationships Between Culture and Knowledge Management. Knowledge and Process Management, 14(3), 226–236. doi:10.1002/kpm.281
Hansen, M. T., Nohria, N., & Tierney, T. (1999). What’s your strategy for managing knowledge? Harvard Business Review, 77(2), 106–116.
Kinsey Goman, C. (2002a, June, 22). Five Reasons people don’t tell what they know Retrieved February, 5, 2003, from http://destinationkm.com/ articles/default.asp?ArticleID=960
Hatcher, L. (1994). A step by step approach to using SAS for factor analysis and structural equation modeling. Cary, N.C: SAS. Hinds, P., & Pfeffer, J. (2003). Why organizations don’t know what they know: cognitive and motivational factors affecting the transfer of expertise. In Ackerman, M., Pipek, V., & Wulf, V. (Eds.), Sharing expertise: beyound knowledge management (pp. 3–26). Cambridge, MA: MIT Press. Hubert, C. (2002). Knowledge Management: It’s About Engaging Your Culture, Not Changing It Retrieved July, 17, 2004, from www.apqc.org/ portal/apqc/site/content?docid=107492 Jennex, M. E., & Olfman, L. (2003). A Knowledge Management Success Model: An Extension of DeLone and McLean’s IS Success Model. Paper presented at the Ninth Americas Conference on Information Systems. Jennex, M. E., & Olfman, L. (2004). Assessing Knowledge Management Success/Effectiveness Models. Paper presented at the 37th Hawaii International Conference on System Sciences. Keskin, H. (2005). The relationships between explicit and tacit oriented KM strategy, and firm performance. Journal of American Academy of Business, 7(1), 169–175. King, W. R. (2006). Maybe a “knowledge culture” isn’t always so important after all! Information Systems Management, 23(1), 88–89. doi:10.1201 /1078.10580530/45769.23.1.20061201/91776.10
Kinsey Goman, C. (2002b). What leaders can do to foster knowledge sharing. Knowledge Management Review, 5(4), 10–11. Know-Net. (2000). The approach, from http:// www.know-net.org Knowledge Management Review (2001, November/December). KM Review survey reveals the challenges faced by practitioners, 4, 8-9. Kramer, R. M. (2007). Organizational Trust: A Reader. USA: Oxford University Press. Lee, H., & Choi, B. (2003). Knowledge Management Enablers, Processes, and Organizational Performance: An Integrative View and Empirical Examination. Journal of Management Information Systems, 20(1), 179–228. Levin, D. Z., Cross, R., & Abrams, L. C. (2002a). Trust and knowledge sharing: a critical combination. IBM Institute for Knowledge-Based Organizations. Levin, D. Z., Cross, R., & Abrams, L. C. (2002b). Why should I trust you? (White paper presented at 2002 Academy of Management meetings). Lindsey, K. (2002). Measuring Knowledge Management Effectiveness: A Task-Contingent Organizational Capabilities Perspective. Paper presented at the Eighth Americas Conference on Information Systems. Luhmann, N. (1979). Trust and Power. New York: John Wiley.
Markus, M. L. (2001). Toward a Theory of Knowledge Reuse: Types of Knowledge Reuse Situations and Factors in Reuse Success. Journal of Management Information Systems, 18(1), 57–93.
Nelson, K. M., & Cooprider, J. G. (1996). The Contribution of Shared Knowledge to IS Group Performance. MIS Quarterly, 20(4), 409–432. doi:10.2307/249562
Massey, A. P., Montoya-Weiss, M. M., & O'Driscoll, T. M. (2002). Knowledge Management in Pursuit of Performance: Insights from Nortel Networks. MIS Quarterly, 26(3), 269–289.
Pauleen, D., & Mason, D. (2002). New Zealand Knowledge Management Survey: Barriers and Drivers of KM Uptake Retrieved January 10, 2004, from http://www.nzkm.net/mainsite/ NewZealandKnowledgeManagementSurveyBarriersandDriv.html
Matthai, J. M. (1989). Employee perceptions of trust, satisfaction, and commitment as predictors of turnover intentions in a mental health setting, Unpublished Doctoral dissertation, Vanderbilt University. McAllister, D. J. (1995). Affect and cognitionbased trust as foundations for interpersonal cooperation in organizations. Academy of Management Journal, 38(1), 24–59. doi:10.2307/256727 McDermott, R. (1999). Why Information Technology Inspired but Cannot Deliver Knowledge Management. California Management Review, 41(4), 103–117. McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). Developing and Validating Trust Measures for e-Commerce: An Integrative Typology. Information Systems Research, 13(3), 334–359. doi:10.1287/isre.13.3.334.81 McKnight, H. D., & Chervany, N. L. (2000, August 10-13). What is Trust? A Conceptual Analysis and an Inderdisciplinary Model. Paper presented at the Americas Conference on Information Systems (AMCIS), Long Beach California. Microsoft (1999). Practicing Knowledge Management. Natarajan, G., & Shekhar, S. (2000). Knowledge management: Enabling Business growth. New Delhi: Tata McGraw-Hill.
Politis, J. D. (2003). The connection between trust and knowledge management: what are its implications for team performance. Journal of Knowledge Management, 7(5), 55–66. doi:10.1108/13673270310505386 Rao, M. (2002, August, 23). Eight Keys to Successful KM practice Retrieved February, 1, 2003, from http://www.destinationkm.com/articles/ default.asp?ArticleID=990 Renzl, B. (2008). Trust in management and knowledge sharing: The mediating effects of fear and knowledge documentation. Omega, 36, 206–220. doi:10.1016/j.omega.2006.06.005 Ribière, V. (2001). Assessing Knowledge Management Initiative Successes as a Function of Organizational Culture. Unpublished D.Sc. Dissertation, The George Washington University, Washington DC. Ribière, V. (2005). The critical role of trust in knowledge management (Le rôle primordial de la confiance dans les démarches de gestion du savoir). Unpublished PhD dissertation (Management Sciences) Université Paul Cézanne, Aix en Provence (France) - available on Proquest. Rigby, D., & Bilodeau, B. (2007). Management Tools and Trends 2007. Bain & Company.
Rolland, N., & Chauvel, D. (2000). Knowledge Transfer in Strategic Alliances. In Despres, C., & Chauvel, D. (Eds.), Knowledge Horizons (pp. 225–236). Butterworth-Heinemann. doi:10.1016/ B978-0-7506-7247-4.50014-8 Sako, M. (2006). Does trust improves business performance? In Kramer, R. M. (Ed.), Organizational Trust: A Reader (pp. p267–p294). Oxford University Press. Schoorman, D. F., Mayer, R. C., & Davis, J. H. (2007). An integrative model of organizational trust: Past, Present and Future. Academy of Management Review, 32(2), 344–354. Schulz, M., & Jobe, L. A. (2001). Codification and tactiness as knowledge management strategies: An empirical exploration. [Article]. The Journal of High Technology Management Research, 12(1), 139–165. doi:10.1016/S1047-8310(00)00043-2 Shaw, R. B. (1997). Trust in the Balance. Building Successful Organizations on Results, Integrity, and Concern. San Francisco: Jossey-Bass. Swan, J., Newell, S., Scarbrough, H., & Hislop, D. (1999). Knowledge Management and innovation:networks and networking. Journal of Knowledge Management, 3(4), 262–275. doi:10.1108/13673279910304014 Swap, W., Leonard, D., Shields, M., & Abrams, L. (2001). Using Mentoring and Storytelling to Transfer Knowledge in the Workplace. Journal of Management Information Systems, 18(1), 95–114. Tan, H. H., & Lim, A. K. H. (2009). Trust in Coworkers and Trust in Organizations. [Article]. The Journal of Psychology, 143(1), 45–66. doi:10.3200/JRLP.143.1.45-66 Teoh, K. K., & Avvari, M. (2004). Integration of TAM Based Electronic Commerce Models for Trust. Journal of American Academy of Business, 5(1/2), 404–410.
Von Krogh, G. (1998). Care in Knowledge Creation. [Article]. California Management Review, 40(3), 133–153. Welch, J. (1993, Jan 25). Jack Welch’s lessons for success. Fortune, 127, 86–91. Wick, C. (2000). Knowledge management and leadership opportunities for technical communicators. Technical Communications, 47(4), 515–529. Williams, S. (2004). Building and repairing trust Retrieved October, 2008, from http://www.wright. edu/~scott.williams/LeaderLetter/trust.htm Wu, J. (2008). Exploring the link beween knowledge management performance and firm performance. Lexington, Kentucky: University of Kentucky. Zack, M. H. (1999). Managing Codified Knowledge. Sloan Management Review, 40(4), 45–58. Zack, M. H., & Michael, S. (1998). Knowledge Management and Collaboration Technologies, from http://www.lotus.com/services/institute.nsf/ 550137bfe37d25a18525653a005e8462/000021ca Zaheer, A., McEvily, B., & Perrone, V. (1998). Does Trust Matter? Exploring the Effects of Interorganizational and Interpersonal Trust on Performance. Organization Science, 9(2), 141–159. doi:10.1287/orsc.9.2.141 Zand, D. E. (1997). The leadership Triad - Knowledge, Trust, and Power. New York, NY: Oxford University Press.
Chapter 12
Advancing the Success of Collaboration Centered KM Strategy
Johanna Bragge, Aalto University School of Economics, Finland
Hannu Kivijärvi, Aalto University School of Economics, Finland
Abstract

Knowledge is today more than ever the most critical resource of organizations. At the same time it is, however, also the least-accessible resource that is difficult to share, imitate, buy, sell, store, or evaluate. Organizations should thus have an explicit strategy for the management of their knowledge resources. In this chapter we pay special attention to a KM strategy called collaboration centered strategy. This strategy builds on the assumption that a significant part of personal knowledge can be captured and transferred, and new knowledge created, through deep collaboration between the organization's members. A critical element in the collaboration centered KM strategy is the facilitation process that involves managing relationships between people, tasks and technology. We describe how the Collaboration Engineering approach with packaged facilitation techniques called ThinkLets is able to contribute to this endeavour.
DOI: 10.4018/978-1-60566-709-6.ch012

Introduction

Knowledge is today more than ever the most critical resource of organizations. At the same time it is, however, also the least-accessible resource that is difficult to share, imitate, buy, sell, store, or evaluate. As for any other critical resource, organizations should have an explicit strategy for the management of knowledge resources, too. Organizations should plan how to harness
knowledge resources successfully in relation to organizational goals, objectives and strategies. What makes it challenging is that knowledge in organizations is typically dispersed in the minds of its members, working routines and processes, organizational rules, etc. Part of the knowledge is highly personal, difficult or even impossible to transform to wider usage. Especially the content of so-called tacit knowledge that is hidden even from its owner is difficult to harness, and it requires special arrangements to ‘convert’ or transfer it to wider organizational usage. Smith et
al. (2007) claim that knowledge managers have started to recognize that they need to become more sophisticated in their system-driven approach to facilitating knowledge transfer, as it does not suffice to simply build a database with codified knowledge and wait for it to be used (see discussion also in Cross and Baird, 2000). Recently, Mäki (2008) has found that knowledge-intensive organizations encounter problems with the management of encoded knowledge and information. He recommends common organizational practices to support the use and application of both encoded and tacit knowledge in organizations. Smith et al. (2007) divide the four main types of tacit knowledge that organizations wish to transfer into best practices, expertise, experience and innovation. Regarding best practice transfer, Smith et al. (2007) claim that it is probably the one that most lends itself to technical facilitation and has the clearest value proposition associated with it. Moreover, expertise takes a long time to develop, and thus it would be beneficial for companies to find ways to develop it more rapidly (Smith et al., 2007). Besides knowledge transfer initiatives, measures to ensure collaboration must also be taken in order for organizations to create knowledge, to innovate (Kolfschoten, 2007) and to overcome the frequent resistance to share knowledge (Thomas, 2006). Collaboration, defined as "joint effort toward a goal" (Kolfschoten, 2007) or as the "extent to which individuals actively communicate, cooperate, and help one another in their work by sharing knowledge and expertise with one another" (Thomas, 2006), is found to be one of the critical factors for knowledge management systems success, besides top management leadership and compensation schemes (Thomas, 2006). Similarly, Hansen and Nohria (2004) and Tapscott (2006) have emphasized the necessity of fostering collaboration for competitive advantage. Regarding collaborative knowledge work practices, Mäki (2008) has found that modern IT applications have
not been able to replace the quality, or the need, of face-to-face interaction. Face-to-face collaboration, however, becomes fairly time-consuming and challenging, process-wise, as the group size grows beyond 3 or 4 people. Early studies on organizational effectiveness already found that group tasks typically result in process losses (Lorge and Solomon, 1955; Steiner, 1972). These losses may occur due to production blocking (e.g., when waiting for one's own turn to speak), evaluation apprehension, poor coordination or motivational problems. In order to mitigate the process losses - and simultaneously to stimulate process gains like synergy and learning - researchers in management information systems have proposed the deployment of a special type of groupware, Group Support Systems (GSS), in group tasks (Huber, 1984; DeSanctis and Gallupe, 1987; Nunamaker et al., 1991). The goal of GSS is to help organizational teams make faster, more satisfying, and ultimately better decisions than those made in face-to-face, manually supported meetings (Fjermestad, 1998). GSS installations typically consist of 10-30 networked computers in the same meeting room, with special software that enables parallel and anonymous input, real-time voting, group memory and automated reporting of the meeting minutes. The meetings are normally administered by a facilitator, following a predefined agenda that is built together with the problem owner. Process facilitation has been found to be among the most critical success factors for effective and efficient collaboration (Anson et al., 1995; Niederman et al., 1996; Ackermann et al., 2005; Dennis and Wixom, 2001; Bragge et al., 2007). An extensive amount of research from both experimental and field studies has found that the efficiency and effectiveness of facilitated group work may indeed be increased by GSS - savings of up to 50% in individual work hours have been reported when compared to regular meetings (Fjermestad and Hiltz, 1999, 2000). Despite the significant efficiency gains accrued, GSS have
Despite the significant efficiency gains accrued, GSS have not diffused into organizations as widely as one would have expected (Briggs et al., 2003; Kolfschoten, 2007). However, to tackle this dilemma, a new stream of research called Collaboration Engineering (CE) has recently emerged from the literature on GSS (Briggs et al., 2003). CE researchers are developing guidelines for designing high-quality collaboration processes. The ultimate goal of CE is that recurring collaborative work practices can be executed by practitioners themselves without ongoing support from professional facilitators, who tend to be a scarce resource in organizations (Briggs et al., 2003; Kolfschoten et al., 2006; de Vreede et al., 2009). In this chapter we pay special attention to a KM strategy called the collaboration centered strategy. This strategy builds on the assumption that a significant part of personal knowledge can be captured and transferred, and new knowledge created, through deep collaboration among the organization's members (and sometimes also with its external stakeholders). A critical element in the collaboration centered KM strategy is the facilitation process, which "involves managing relationships between people, tasks and technology, as well as structuring tasks and contributing to the effective accomplishment of the meeting's outcome" (den Hengst and Adkins, 2007; Clawson et al., 1993). We believe that the CE approach is able to provide valuable and concrete support for this. The success of a collaboration centered KM strategy is, however, not easy to assess. It is clear that the success of a collaboration centered KM strategy is multidimensional and context specific. In addition to task specific outcomes, collaboration processes have group related as well as facilitator related outcomes. The purpose of this chapter is to discuss the multidimensional nature of knowledge management (Nissen and Jennex, 2007) and the multidimensional nature of the success of a collaboration centered KM strategy. We are particularly interested in the facilitators' roles and responsibilities within the collaboration processes. We formulate our research question as follows:
"How can the success of a collaboration centered KM strategy be advanced and, especially, what is the role of the facilitator in it?" To answer our research question, we first present the conceptual background from the literature and then propose a framework showing the different constituents, structures, processes, and possible outcomes of the collaboration centered KM strategy, as well as the critical role of the facilitator in the adopted KM strategy. The framework serves as a basis for evaluating and measuring the success of the collaboration centered KM strategy. We discuss the potential of the collaboration centered KM strategy in line with the proposed framework.
Conceptual Background

Knowledge Types

Knowledge is today, more than ever, the most critical resource of organizations and the impelling force of individuals. Knowledge requires human judgement, is closely related to action, and presupposes values and beliefs. Polanyi (1962) tied a personal dimension to all knowledge, and his master dichotomy between tacit and explicit knowledge has shaped practically all epistemological discussion, especially since its rediscovery and popularization by Nonaka and Takeuchi (1995). Knowledge is traditionally interpreted as a singular, independent object. Another, procedural interpretation of knowledge is to see it as a path consisting of related steps (Carlile and Rebentisch, 2003). A wider interpretation is to see knowledge as a network or a system in which every element is interrelated, directly or indirectly, with every other. Tsoukas and Vladimirou (2001, p. 979) define knowledge by means of a person's ability to draw distinctions: "Knowledge is the individual ability to draw distinctions within a collective domain
of action, based on an appreciation of context or theory, or both." According to this definition, a person is more knowledgeable if she or he can draw finer distinctions. The value of those distinctions is evaluated when they are used in judgements once actions are taken. Making distinctions, judgements, classifications, and structurings, and getting chaos under control, are capabilities of an expert who has knowledge. Kivijärvi (2008) has elaborated this characterization of knowledge further and defines knowledge as the individual or organizational ability to make decisions. All actions are consequences of decisions. When defining knowledge we should note that decisions are more than distinctions; they are value-driven in the sense that they aim to achieve a specific goal or a set of goals. Oftentimes, knowledge is also defined as 'justified true belief'. It is clear that knowledge is fuzzy and closely linked to the persons who hold it. It rarely remains fixed; its categories and meanings transform frequently. Therefore, knowledge is dynamic and context specific. Without a context, it is just information. One potential context of knowledge creation and use is the organizational context. Organizations have a common capability to act, i.e. knowledge capacity or intellectual capital (Stewart, 1999), the lack of which would inevitably prevent organizational action and would lead to unpredictable disorder and confusion. "Organizational knowledge is processed information embedded in routines and processes that enable action. It is also knowledge captured by the organization's systems, processes, products, rules, and culture" (Myers, 1996). According to Tsoukas and Vladimirou (2001), organizational knowledge is "the set of collective understanding embedded in a firm" (p. 981). It is the capability that the "members of an organization have developed to draw distinctions in the process of carrying out their work, in particular concrete contexts, by enacting sets of generalizations (propositional statements) whose application depends on historically evolved collective understandings and
experiences" (Tsoukas & Vladimirou, 2001, p. 983). In the organizational context, personal (individual) knowledge and organizational knowledge are created, manipulated, transformed, and used in decision making. Personal knowledge is used for personal decision making, whereas organizational knowledge is utilized in organization-wide decision making. Personal knowledge is always tied to personal action and personal valuation, while organizational knowledge is tied to organizational valuation. In addition to the divisions between explicit and tacit knowledge and between personal and organizational knowledge, there are several other types of knowledge, some of which are conscious, others preconscious. Choo (1998) classifies knowledge into three groups: tacit knowledge, explicit knowledge, and cultural knowledge. Scharmer (2001) divides tacit knowledge into tacit embodied knowledge and self-transcending knowledge. Holsapple and Whinston (1996) define three primary types of knowledge - descriptive, procedural, and reasoning knowledge - and three secondary types: presentation, linguistic, and assimilative knowledge. Savage (1996) differentiates five types of knowledge: know-how, know-who, know-what, know-why, and know-when, and Liebman (1998) classifies knowledge into procedural, declarative, and conditional knowledge. In organizational contexts, knowledge resources include all of these types, and the challenge of knowledge management is to advance the exploitation of these multidimensional knowledge resources for the success of the whole organization as well as for individual satisfaction.
Knowledge Management Strategies

According to the resource-based view of the firm (Barney, 1986; Penrose, 1959; Wernerfelt, 1984), firms should position themselves strategically based on their rare, valuable, nonsubstitutable, and imperfectly imitable resources and capabilities
rather than on their products and services. It is assumed that the collection of resources, including tangible and intangible assets, knowledge, and skills, is also the primary predictor of market-based and financial performance. According to this approach, the competitive advantage of a firm is ultimately based on resource heterogeneity and resource immobility. In the markets there are no similar organizations with similar resource bases, and competitors find it impossible or difficult to imitate or substitute these resources. As for any other critical resource, organizations should have an explicit strategy for the management of knowledge resources, too. A knowledge management strategy is a type of resource-based strategy; it is the way or scheme by which epistemic work is done in an organizational context, that is, by which knowledge resources are created, converted, shared, stored, secured, used, and evaluated. Because knowledge resources are at least partly tacit and contextual, i.e. organization specific, they cannot be directly explicated, purchased from markets, or moved from one organization to another. Knowledge strategy is based on experience, continuous learning, and routines. In order to create similar knowledge, competitors have to engage in similar experiences, create similar routines, etc. This is a process that takes time, and thus the business strategies of an organization based on its unique intellectual resources are more competitive and sustainable. When proposing the construct 'business strategic orientation', Venkatraman (1989) applies the distinction between intended and realized strategies. Strategic intent is associated with a priori strategic choices, whereas the realized strategy is defined as a consistent pattern of behavior in the organization (Mintzberg, 1978; Mintzberg and Waters, 1985). As strategic intent can be regarded as a thought test that results in a realized strategy if the intent is carried through, the strategic intent may be changed easily. We should note that most realized strategies emerge without preconception. A knowledge strategy can also be an intended
or realized one, and it can even emerge without preceding formal planning. Part of organizational knowledge can be effectively codified. Codification is an IT-centric strategy for managing knowledge and transferring it across an organization. It is the opposite of the personalization strategy (Hansen et al., 1999). With the codification strategy, "knowledge is extracted from the person who developed it, made independent of that person and reused for various purposes" (Hansen et al., 1999). This strategy is based on reuse economics, investments in IT, and the transfer of knowledge from people to documents and computers. On some occasions, the difference between codified and computerized knowledge and information might be marginal. In addition to codification and personalization, a knowledge (management) strategy can be defined through

• external or internal learning,
• radical or incremental learning,
• learning speed, and
• the breadth of the knowledge base (Bierly and Chakrabarti, 1996).
A knowledge strategy can also be based on the exploration or exploitation of external or internal knowledge sources (Zack, 1999). For choosing an appropriate knowledge strategy, Zack suggests a SWOT analysis to evaluate the potential of the strategy alternatives.
Models for Evaluating the Success of Knowledge Management

Knowledge management (KM) is that part of organizational administration that focuses on the management of the knowledge resources and information capital held by an organization. According to an industry survey (KPMG, 2002/2003), "companies use knowledge management to realise synergies among units (83%), accelerate innovation (63%), achieve higher customer added value
(74%), reduce costs (67%), improve quality (70%) and reduce exposure to risks (26%)". In a healthy organization, knowledge management extends across all organizational layers as well as all functional borders. In addition to being an operative, everyday tool, it is also a strategic weapon on the road to competitive advantage. In general, knowledge management is a broad concept that includes, e.g., technological, organizational, behavioral, and managerial issues. Based on two exploratory surveys grounded in the literature, Jennex et al. (2007, 2008) define KM success as follows: "KM success is a multidimensional concept. It is defined by capturing the right knowledge, getting the right knowledge to the right user, and using this knowledge to improve organizational and/or individual performance. KM success is measured by means of the dimensions: impact on business processes, impact on strategy, leadership, and knowledge content". Some of the KM evaluation models originate clearly from the KM domain, whereas others are borrowed from neighboring disciplines such as management information systems. The IS success model suggested by DeLone and McLean (1992) has achieved a permanent status as a kind of reference point in later success studies. In the original model, system quality and information quality have an effect on user satisfaction and IS use, which in turn impact the organization through individual impacts. This model has later been validated, applied, criticized, updated, and extended in various directions in a number of studies (Seddon, 1997; Wu and Wang, 2006; Briggs et al., 2002; Jennex and Olfman, 2006). Seddon (1997), for example, reordered the constructs and placed the use of a system after perceived usefulness and user satisfaction. DeLone and McLean (2003) updated their model by adding 'service quality' and 'intentions to use' and by incorporating 'individual impact' and
'organizational impact' into 'net benefits'. They also added feedback relations to the model. Jennex and Olfman (2006) have applied the principles of the model to evaluate the success of KM and knowledge management systems (KMS). When the model was exported from the IS to the KM domain, notable redefinitions of the constructs had to be made. Wu and Wang (2006) also respecified the DeLone and McLean model in order to measure KMS success, and in their empirical analyses they found considerable support for the model. Khalifa et al. (2008) argue that the usage of KMS does not necessarily lead to organizational performance improvement directly; rather, the effect is mediated by knowledge-intensive capabilities such as agility and innovativeness. Kulkarni et al. (2006) depart from the DeLone and McLean model by looking at KMS implementation related issues such as the development of organizational arrangements, policies, processes, and incentives. Their study is also a more generalized, broader effort across different organizations instead of a study of a single system in a particular organization. Common to the above studies is the use of 'user satisfaction' as an intermediary variable between the independent and dependent variables. Lai et al. (2008), on the other hand, define user satisfaction as a major indicator of KMS success. The task-technology fit model presented by Goodhue and Thompson (1995) extends the DeLone and McLean model by highlighting the importance of task-technology fit in explaining how technology leads to performance impacts. This model argues that the use of a technology may result in different outcomes depending on the match between the technology's features and the requirements of the task. Technology includes a wide range of IT, such as hardware, software, data, user support, etc. Tasks are broadly defined as the actions carried out in turning inputs into outputs in order to satisfy information needs. The model proposes that information systems "have a positive impact on performance only when there is correspondence between their functionality and
the task requirements of users" (Goodhue and Thompson, 1995, p. 214). The original Goodhue and Thompson model has been applied to evaluate the success, effectiveness, or performance of knowledge management systems (Lin and Huang, 2008), group support systems (Zigurs and Buckland, 1998; Zigurs et al., 1999; Murthy and Kerr, 2000), a system's perceived ease of use, etc. A number of evaluation models are defined inside the KM domain, drawing relatively closely on KM concepts and theories. Muhammed et al. (2008) focus on KM success at the individual level. They build on conceptual, contextual, and operational knowledge. The success of KM is finally measured by two constructs: the extent to which individuals generate and apply new innovations in their work, and how well the individuals' work is done. Lindsey (2002) defines KM effectiveness by means of 'knowledge infrastructure capability' and 'knowledge process capability'. In his model, 'knowledge infrastructure capability' refers to social capital (the relationships between knowledge sources and users) and 'knowledge process capability' to the integration of the KM processes into the organization. Lindsey's (2002) success model is also process-oriented, because it defines KM processes in four phases: acquisition (capturing of knowledge), conversion (making captured knowledge available), application (the degree to which knowledge is useful), and protection (security of the knowledge). Lin (2007) uses the same process model to study whether KM processes change over time in order to improve KM effectiveness.
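As a concrete illustration of treating KM success as a multidimensional construct, the short sketch below (in Python) computes a weighted composite score from dimension ratings, using as dimension labels those named in the Jennex et al. definition quoted above. The weights, the 1-7 scale, and the ratings are illustrative assumptions, not a validated instrument.

```python
def km_success_score(ratings, weights):
    """Weighted composite of KM success dimension ratings (e.g. on a 1-7 scale)."""
    total_weight = sum(weights.values())
    return sum(ratings[d] * w for d, w in weights.items()) / total_weight

# Hypothetical survey-based ratings for the dimensions named by Jennex et al.
ratings = {"impact on business processes": 5.5, "impact on strategy": 4.8,
           "leadership": 5.1, "knowledge content": 6.0}
weights = {"impact on business processes": 0.35, "impact on strategy": 0.25,
           "leadership": 0.15, "knowledge content": 0.25}
print(round(km_success_score(ratings, weights), 2))  # 5.39 with these illustrative numbers
```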
Knowledge Sharing and Collaboration

Knowledge grows from sharing (Sveiby, 1997). Knowledge sharing is a process through which knowledge is exchanged among individuals, within or between groups or organizations. Knowledge transformation between individuals is an activity in which knowledge is not divided but multiplied.
However, knowledge sharing is to some extent a problematic concept. First, although knowledge sharing is a prerequisite for knowledge growth, sharing knowledge is no different from other business transactions - valuable knowledge is not shared without compensation. "Knowledge is a competitive resource not only on the organizational level but also on the individual level. People do not share knowledge without a strong personal motivation, and they would certainly not give it away without concern for what they may gain or lose by doing so." (Stenmark, 2001, p. 21). Thus, knowledge is power. Secondly, a significant part of knowledge is tacit, hidden even from the holder of the knowledge, and it is difficult if not impossible to convert it to an explicit form. Indeed, knowledge conversion may be regarded as a process where one form of knowledge is generated in the context of acting with the aid of another type of knowledge (Cook and Brown, 1999, p. 385). Specifically, the tacit dimension of personal knowledge manifests itself only in the knowing process. Collaboration between individuals can be such a process, one where two or more people work together toward a common goal by sharing knowledge, bargaining, and searching for compensation. Knowledge sharing and collaboration should be fostered by organizations, as "knowledge created within the firm is especially valuable because it tends to be unique, specific and tacitly held", and it is therefore more difficult for competitors to imitate and thus strategically valuable (Zack, 1999, pp. 138-139). Oftentimes internal knowledge creation should be combined with external information and knowledge (Fedorowicz et al., 2008), which is found in literature sources or communicated through the organization's stakeholders, e.g. network partners or customers. For knowledge oriented organizational settings, Kolfschoten (2007) defines collaboration as "joint effort toward a goal". When defined this way, goal achievement, logically, is a success factor of collaboration. Furthermore, collaboration itself is not a goal; "it is a process, instrumental to a
goal" (ibid.). Viewed as a process, collaboration lends itself to managerial action, and thus it can be managed. It is clear that there exists a continuum of possible collaboration processes, from ad-hoc traditional meetings of 2-3 people without a preplanned agenda to large-scale meetings that have been planned for weeks and are supported with advanced technology and process facilitation. When deciding how to foster collaboration, especially for high-value tasks, organizations need to think through a range of issues, weigh the pros and cons of possible approaches, and make tradeoffs, e.g. between customized and off-the-shelf solutions. Regarding process facilitation, professional facilitators tend to be expensive and scarce, while training internal facilitators takes an extensive amount of time before they become as experienced and skilled as professionals. Also, the faithful appropriation of the technology to be employed, that is, using it as intended by the systems designers (Dennis et al., 2001), is challenging for novices. Collaboration Engineering (CE) research has emerged to tackle all of these issues, and it can provide invaluable support for significant collaborative tasks. We describe the approach in the next section.
Collaboration Engineering (CE) Approach

CE is an approach to designing collaborative work practices for high-value recurring tasks and deploying them as process prescriptions that practitioners can execute for themselves without ongoing support from professional facilitators (de Vreede and Briggs, 2005; Kolfschoten, 2007). The CE approach thus makes a distinction between the roles involved in the design and the execution of a collaboration process, both of which are traditionally the responsibility of the facilitator. In CE, the tasks are split up: the process design is the collaboration engineer's task, while the recurring process execution is left to the practitioner.
The collaboration engineer (CEer) begins the task analysis by decomposing the process into several activities (generate, reduce, clarify, organize, evaluate, or build consensus) and mapping them to facilitation techniques (Kolfschoten, 2007). In this, the CEer uses standardized and codified facilitation intervention components called thinkLets, which yield predictable, reusable patterns of collaboration among people working together toward a goal. The collaboration process designs provide ready-to-apply CE recipes to be conducted by practitioners, in accordance with the ultimate aim of the CE approach (de Vreede and Briggs, 2005; Kolfschoten et al., 2006). Before transferring the process description to the practitioner, it is important to validate (and adjust) the design, e.g. through pilot sessions, expert validations, or simulations. During the transfer as well, several learning efforts by the practitioner (in the process training, in preparing to apply it, and in the first trials of the process execution) may uncover problems that require refinements of the design. When the transfer phase is completed, full-scale implementation - and eventually also sustained use - in the organization may take place. This calls for managerial activities, planning, and organization. For instance, management should stimulate the use of the CE process through incentives and controls. As has been found in traditional facilitation, here too the success of the practitioner as a facilitator is key to the successful implementation of the process (Kolfschoten, 2007). To sum up, the CE approach is not intended for ad-hoc, one-off processes, but for recurring, high-value tasks. If the task does not recur, it would not benefit the practitioners to learn how to execute the process (Kolfschoten, 2007). However, the informative CE process designs can still provide additional learning insights and ease the preparatory communication among all stakeholders, even when the CEer executes the process (see discussions in Bragge et al., 2005a; 2007).
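To make the split between design and execution concrete, the sketch below (in Python) represents a collaboration process design as an ordered list of thinkLets, each mapped to one of the six pattern classes. The data structure is our own simplification of the CE concepts; the example thinkLet names (FreeBrainstorm, FastFocus, StrawPoll) are drawn from the thinkLets literature, but the scripts and the agenda shown are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Pattern(Enum):
    GENERATE = "generate"
    REDUCE = "reduce"
    CLARIFY = "clarify"
    ORGANIZE = "organize"
    EVALUATE = "evaluate"
    BUILD_CONSENSUS = "build consensus"

@dataclass
class ThinkLet:
    name: str         # codified facilitation intervention, e.g. "FreeBrainstorm"
    pattern: Pattern  # the collaboration pattern it reliably produces
    script: str       # the instructions the practitioner follows when executing the step

@dataclass
class ProcessDesign:
    goal: str
    steps: list       # ordered thinkLets composed by the collaboration engineer

    def agenda(self):
        # The practitioner executes this recipe step by step, without a professional facilitator.
        return [f"{i + 1}. {s.name} ({s.pattern.value}): {s.script}" for i, s in enumerate(self.steps)]

design = ProcessDesign(
    goal="Elicit and prioritize improvement ideas for a recurring review task",
    steps=[
        ThinkLet("FreeBrainstorm", Pattern.GENERATE, "collect ideas anonymously and in parallel"),
        ThinkLet("FastFocus", Pattern.REDUCE, "extract a clean list of the key ideas"),
        ThinkLet("StrawPoll", Pattern.EVALUATE, "rate the key ideas to reveal agreement and disagreement"),
    ],
)
print("\n".join(design.agenda()))
```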
Figure 1. GSS research framework of Pinsonneault and Kraemer (1989)
Models for Evaluating the Success of GSS-Mediated Group Performance

In this section we review the research that has dealt with the success evaluation of GSS-mediated group meetings. According to Kline and McGrath (1999), the process loss model (Lorge et al., 1958; Steiner, 1972) is by far the most popular approach for examining traditional team performance, and it has been adopted in most GSS studies as well. The model supposes that there is input (e.g. group member expertise, personality attributes, experience), which is followed by a process that includes the interaction between the group members. This is followed by an output, which may include, e.g., the number and quality of ideas generated. In the process itself, losses may occur, for example, due to production blocking (waiting for one's own turn to speak), evaluation apprehension, poor coordination, or motivational problems (see Nunamaker et al., 1991). Besides the process loss related variables (e.g. anonymity, which reduces the reluctance to contribute information), different contextual variables have also been taken into account when studying GSS group performance. For instance, group size, participant proximity (distributed or face-to-face), and cultural differences have been extensively studied. Pinsonneault and Kraemer (1989) categorized the contextual variables into five classes: personal factors, situational factors, group structure, technological support, and task characteristics (see Figure 1). The task-technology fit (TTF) model discussed earlier has also been utilized extensively in GSS research (see e.g. Zigurs and Khazanchi, 2008). Already DeSanctis and Gallupe (1987), when setting the foundations for the study and design of GSS, suggested that the task type confronting the group should also be taken into account. Their task dimensions were drawn from the well-known task circumplex of McGrath (1984), which includes four main task types: generation (creativity, planning), choice (intellective, decision-making), negotiation (cognitive conflict, mixed-motive), and execution (performance, competitive) tasks. TTF theories are intended to provide guidance for and understanding of how best to match a tool with a problem, e.g. an appropriate set of collaboration technology capabilities with a particular group task and context (Zigurs and Khazanchi, 2008). Besides the TTF model of Zigurs and Buckland (1998), media richness theory (Daft and Lengel, 1986), channel expansion theory (Carlson and Zmud, 1999),
adaptive structuration theory (DeSanctis and Poole, 1994), and the fit-appropriation model (Dennis et al., 2001) were listed by Zigurs and Khazanchi (2008) as representative TTF theories employed and tested in GSS studies. Fjermestad (1998) has conducted an extensive study consolidating the work of previous GSS frameworks. His integrated framework includes four major factors: contextual, intervening, adaptation, and outcome variables. On the input side are the static contextual factors, which are all the external or driving variables that comprise the environment or conditions for the task. They are divided into context, group, task, and technology categories. The interaction process is divided into two parts. In the first part, the intervening factors (either static or dynamic) represent the emergent structuring of the group interaction, derived from and adding to the set of conditions created by the context of the group sessions. In the second part, the group's dynamic adaptation (or interaction process) factors are controlled by the group on an individual or a collective basis. Changes in these variables act to influence the intervening variables, sometimes together with contextual factors. The static outcomes are the results of the interplay of the intervening factors and the adaptation of the group with the contextual factors, and they are divided into efficiency, effectiveness, satisfaction, consensus, and usability measures (Fjermestad, 1998). Fjermestad (ibid, p. 104) has called his integrated factors model a 'prototheory', which predicts contingent relations of the form:

• if the GSS technology (tools and embedded structures) is appropriate to the group, tasks and environmental context; and
• if intervening factors are appropriate (such as adequate training); and
• if the group's adaptive structuration of the tools and procedures provided is faithful, so that the intended process gains are achieved and process losses avoided;
• then GSS will lead to certain desirable outcomes (such as better decisions).
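The contingent structure of this prototheory can be restated schematically in code: desirable outcomes are predicted only when all three conditions hold. The fragment below (in Python) is merely our own restatement of that logic with hypothetical parameter names; it is not an operationalized measurement model.

```python
def gss_desirable_outcomes_expected(technology_fits_context: bool,
                                    intervening_factors_appropriate: bool,
                                    appropriation_faithful: bool) -> bool:
    # Fjermestad's prototheory, restated: only the conjunction of the three conditions
    # predicts desirable outcomes such as better decisions.
    return (technology_fits_context
            and intervening_factors_appropriate
            and appropriation_faithful)

# Example: adequate technology and training, but unfaithful appropriation of the tools.
print(gss_desirable_outcomes_expected(True, True, False))  # False
```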
Den Hengst et al. (2006) reviewed the constructs of Pinsonneault and Kraemer (1989) and of Fjermestad and Hiltz (1999, 2000), as well as some other constructs that have been used to evaluate the success of collaboration processes, and they formulated additional quality constructs for reusable collaboration processes designed using the Collaboration Engineering (CE) approach. The additional constructs are reusability (portability, adaptability, specification), predictability (difference in input, difference in output, reliability, robustness), and transferability (conceptual load, perceptual load, access load), and they are recommended to be considered together with the CE approach. Naturally, it is impossible to measure everything and take every single aspect into account. Thus, a selection from all the quality constructs must be made according to the goal of the collaboration process and the goal of the evaluation itself (Den Hengst et al., 2006). Kolfschoten (2007) continues the above research by distinguishing the 'quality of collaboration' (the appreciation of the joint effort by the relevant stakeholders, measured by effectiveness, efficiency, productivity, commitment, and satisfaction) from the 'quality of the collaboration process design'. She first presents the CE approach in line with the Business Process Change paradigm of Kettinger and Teng (1997), which uses a process reengineering life cycle to describe the process from envisioning to inauguration, to diagnosis, to (re-)design, to (re-)construction, and to evaluation. The CE approach uses similar phases and steps to analyze, design, deploy, and evaluate the new collaboration processes. Similar to envisioning, it distinguishes (1) an initial state in which the applicability and added value of the approach and the investment are addressed. Next, (2) the design team is established (inauguration) and the task is analyzed. Then, (3) goal setting, task diagnosis, and design can begin. After these often iterative steps, the design is finished, and (4) transfer,
piloting, and (5) implementation can start. Once the process is implemented, it can be (6) adopted by the organization to eventually become a sustained work practice (Kolfschoten, 2007, p. 22). Based on the above phased model, Kolfschoten (2007, p. 31) defines the 'quality of the collaboration process design' generally as the "degree to which the CE design supports a practitioner to support the group in achieving its goal". It can be measured on five quality dimensions: efficaciousness, acceptance, reusability, predictability, and transferability (ibid, p. 43). In the following discussion, knowledge strategy is understood as a knowledge management strategy. Next, the success of one potential knowledge strategy, the collaboration centered strategy, is discussed.
A Framework for Advancing the Success of the Collaboration Centered KM Strategy

The Framework

As discussed in the previous section, the collaboration centered KM strategy is a process-oriented strategy in which the process proceeds from the inaugural phase to the decision and evaluation phases. Each of the phases can be performed better or worse, and the quality of those phases is influenced by a number of input factors. The input factors may be classified into four categories: task, technology, group, and facilitator. It is clear that the task in question has significant effects on the collaboration process and the quality of its phases. Available data and information, problem complexity, level of conflict, and urgency are just examples of the task related factors. Moreover, the available technology can place limits on, or open opportunities for, the process flow. For example, web-based collaborative technologies (including both GSS and videoconferencing
'see-you-see-me' capabilities) enable geographically distributed meetings more fluently than local meeting-room based technologies. Sometimes no ICT is wanted or available. In order to collaborate, a group of people is needed. The members of the group have different backgrounds, experiences, education, knowledge, etc. that influence the collaboration process. Simply put, favorable or unfavorable attitudes toward cooperating with certain members may turn the whole collaboration process into either a success or a disaster. The collaboration process may or may not employ a facilitator. Particularly if the technology plays a significant role in the process, supporting facilitation may be needed. In the CE approach, the need for external or professional facilitation is minimized, and the process execution can be transferred to the substance-oriented group members. Generally, education, previous experience, personal characteristics, available support materials, etc. shape the facilitators' work and, as a consequence, influence the quality of the collaboration process. As discussed earlier, the collaboration process is not worth planning or evaluating in isolation. There must be a goal toward which the process aims. Primarily, the goal and its sub-goals should be set up before the actual process is initiated, but, of course, some goals can be adjusted during the process. In any case, the realization level of the goals depends on the quality of the collaboration process. It might be that the early phases of the process determine the achievement of some sub-goals, while the realization level of other goals depends on the quality of the last phases. Goals can be attached to task, technology, group, or facilitator related features. Naturally, there may be hierarchical relations between the goals: some goals can be more important than others, some goals are expressed explicitly whereas others are more or less implicit, etc. Figure 2 describes the structure of our framework with illustrative examples.
Figure 2. A framework for advancing the collaboration centered KM strategy
The framework is centered on the concept of the collaboration process. The figure illustrates the process flow of a typical GSS-aided collaboration process (based on Turban et al., 2007). The arrows depict alternative routes that may be planned and taken. Often, after the key ideas have been prioritized, additional idea generation follows to elaborate on them. Sometimes the process may include only idea generation, even though the ideation results will eventually be processed further, perhaps by another group of people. It should be noted that some of the input factors are controllable while others are beyond control. We may have influence on the task content or group composition, but the technology, for example, may be given for a certain collaboration situation and time frame, although at the strategic decision-making level even it can be altered. The CE approach is worth considering especially for recurring high-value processes. CE relies on design guidelines built around reusable building blocks called thinkLets, which are divided into six collaboration pattern classes: generate, reduce,
clarify, organize, evaluate, and build consensus (Kolfschoten et al., 2006). The thinkLets (over 70 so far) have been codified by highly experienced professional facilitators in order to transfer their expertise and skills in facilitating group processes to practitioners. Referring to the tacit knowledge types of Smith et al. (2007) discussed in the introduction, the CE approach in a way encompasses all four types: the transfer of expertise (in the form of thinkLets), of best practices (in the form of collaboration processes composed of thinkLets), of experiences (in the form of codified insights), and of innovation (the creation of new knowledge through collaboration processes). Detailed descriptions of CE field studies can be found, e.g., in den Hengst et al. (2004), de Vreede et al. (2005), and Bragge et al. (2005a, 2005b, 2007, 2009). In addition, de Vreede et al. (2009) give a brief review of several other CE field studies in the areas of collaborative mission analysis, crisis response training and operational execution, project knowledge elicitation, usability testing, and software requirements negotiation.
Depending on the approach chosen in each collaborative situation, the roles of the facilitator can vary. In the most straightforward cases there is no need for a facilitator, e.g. in meetings of established small project teams. But when the benefits of facilitation exceed its costs, there exist several options: using professional (external or internal) facilitators, or using practitioners (internal or external) as facilitators. The roles of the facilitator are partially dependent on this choice. Traditionally, both the design and execution tasks belong to professional facilitators, while in the CE approach the execution tasks are transferred to the practitioner. Building on an extensive review of the GSS facilitation literature, Kolfschoten (2007, p. 193) enumerates over 70 sub-tasks to be considered in the design phase and over 80 sub-tasks for the execution phase (some tasks apply to both phases). These include, for example, recognizing the stages of a group process, enabling participants to contribute freely, and managing group creativity, anxiety, and conflict. Additionally, there are numerous resources outside the GSS domain that offer insights into facilitator roles. For example, Schwarz (2002) builds his 'skilled facilitator' approach on nine ground rules that should be endorsed for effective facilitation, e.g. sharing all relevant information and focusing on interests, not positions. In complex multi-organizational contexts, facilitators have been found to take on extended roles, e.g. arbitrator, referee, or moderator (Ackermann et al., 2005; Bragge et al., 2007). In those contexts, it is very important for the facilitators to ensure that the intervention provides the means for developing a common basis for shared understanding. In our framework, the process and outcome perspectives are integrated. Obviously, the framework is context dependent and necessarily general. In each collaboration situation we need to determine its ultimate goal and assess the process flow, the input factors in each category, and the sub-goals and objectives given to each outcome type. Because it is hard to nominate and measure the
final outcomes of the collaboration process, we need to use, among other things, satisfaction measures, as is common when evaluating the success of IS, KMS, and GSS.
Success of the Collaboration Centered Knowledge Strategy

The framework discussed above implicitly includes a causal chain from the input variables to the final success of the strategy. Figure 3 explicates the underlying sequence. Figure 3 is an explication of the framework, but at the same time it is a research model. The first main point of the model is that collaboration is seen as a process and that the quality of the process depends on the values of the controllable and uncontrollable input variables. The relationships between the input variables and the quality of each collaboration phase are named quality functions. The input variables, in turn, are classified into task, technology, group, and facilitator variables. Instead of the term variable we could use 'construct' to depict the multidimensional nature of the input factors. The model also posits that at the beginning of the collaboration effort one or more goals are given, explicitly or implicitly, to the process. According to the model, the success of the collaboration centered KM strategy is then assessed through the level of the realized goals. The level of realization is believed to depend on the quality of the collaboration process, and these relationships are named 'realization functions'. Thus, success is multidimensional, and the final success depends on the relative importance of each dimension. The proposed framework and the respective model can be used to evaluate any collaborative effort or to design such efforts in advance, i.e. to predict the outcomes of a particular collaboration effort. The managerial implications of the model may be expressed in the form of a question: "In what circumstances will the collaboration centered knowledge strategy succeed?" The answer is of the
Figure 3. Control span of the collaboration centered knowledge strategy (* includes both controllable and uncontrollable variables)
form: "Assign to the controllable variables such values that the final success is maximized". Naturally, in order to detail the answer in a particular situation we need to specify the quality, realization, and success functions. We need to clarify how the quality of the different collaboration phases depends on the different input variables, how the goal realizations depend on the quality of the phases, which goals are important, and how the goal realizations shape the final success of the collaboration centered KM strategy. A minimal sketch of this chain of functions is given below. Next, some initial measures to implement and evaluate the framework are discussed.
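The following sketch (in Python) illustrates, under strongly simplifying assumptions, how the quality, realization, and success functions could be chained; the factor names, weights, and linear (weighted-average) forms are illustrative assumptions only, not part of a validated instrument.

```python
def phase_quality(inputs, influence):
    """Quality function: the quality of one collaboration phase, computed here as a
    weighted average of the (controllable and uncontrollable) input factor ratings."""
    return sum(inputs[k] * w for k, w in influence.items()) / sum(influence.values())

def goal_realization(qualities, dependence):
    """Realization function: the realization level of one goal from the phase qualities."""
    return sum(qualities[p] * w for p, w in dependence.items()) / sum(dependence.values())

def strategy_success(realizations, goal_weights):
    """Final success as a weighted combination of the realized goals."""
    return sum(realizations[g] * w for g, w in goal_weights.items()) / sum(goal_weights.values())

# Illustrative input ratings on a 1-7 scale for task, technology, group, and facilitator factors.
inputs = {"task clarity": 6, "technology fit": 5, "group readiness": 4, "facilitation skill": 6}
qualities = {
    "idea generation": phase_quality(inputs, {"group readiness": 0.5, "facilitation skill": 0.3, "technology fit": 0.2}),
    "prioritization": phase_quality(inputs, {"task clarity": 0.4, "technology fit": 0.4, "facilitation skill": 0.2}),
}
realized = {
    "shared vision": goal_realization(qualities, {"idea generation": 0.6, "prioritization": 0.4}),
    "agreed goals": goal_realization(qualities, {"prioritization": 1.0}),
}
print(round(strategy_success(realized, {"shared vision": 0.5, "agreed goals": 0.5}), 2))
```

In practice, choosing values for the controllable inputs so that the computed success is maximized corresponds to the managerial advice stated above; the uncontrollable inputs set the boundary conditions for that optimization.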
Preliminary Evaluation of the Framework

General Evaluation

In this section we evaluate the merits and applicability of the framework based on our years of experience with collaboration centered KM, which has also influenced the framework's formation. The framework and its integrated model may be utilized either implicitly, by using subjective evaluations of the relations and implications of the different constructs, or to its full capacity. The latter necessitates gathering data systematically from various kinds of collaboration processes and storing it in a database in order to be able to detect the underlying relationships
between the constructs. This approach would enable the creation of a kind of best practices for the success of collaboration centered KM. Note that the framework is not intended for simple 2-3 person collaborative situations, but for more complex cases that may include tens or even hundreds of collaborators. Although the framework requires historical data or subjective evaluations regarding the future, its strengths are manifold. Among others,

• it can be employed both to evaluate the past and to predict the future,
• it covers logically the whole chain from the different input factors to the overall success of a knowledge strategy, and
• it can be used both in theoretical research and in practical management.
When planning for collaboration, a varying mixture of questions needs to be considered and, consequently, taken into account when applying the framework. Among others:

Goals:
◦ What is the general goal, and what are the sub-goals, of the collaboration?
◦ Is there a need to define more specific objectives regarding the goals?
◦ What is the level of conflict between the individual and group goals?

Task:
◦ What is the type of the task (generation, choice, negotiation, or execution; McGrath, 1984)?
◦ Does the task recur regularly, or is it one-off?
◦ Is the task of high value for the organization?

Group:
◦ What is the composition of the group of people (= collaborators) needed to attain the goal? Are they all employees of the organization, or are partners, competitors, customers, or end users involved?
◦ What is the size of the group (small: 2-5, medium: 6-9, large: 10 or more)?
◦ Are the collaborators geographically distributed? Are they in different time zones?
◦ Are there notable cultural differences between the collaborators?
◦ Have the group members met before (no, a few times, an established group)?
◦ What is the initial understanding of technology, or the "technology readiness" (Parasuraman, 2000), of the collaborators?

Collaboration process:
◦ What is the level of collaboration needed (collected work, coordinated work, or concerted work; Nunamaker et al., 2001)?
◦ How much time is there to plan the process?
◦ What is the ideal mode of the collaboration (same time - same place, same time - different place, different time - different place)?

Technology:
◦ What types of information systems support each collaboration level (word processors, spreadsheets, workflow systems, GSS, etc.; Chen et al., 2006)?
◦ What capability level of GSS tools is needed (Level 1 tools for exchange of information; Level 2 tools to aid in decision-making, e.g. in organizing, modeling, changing, and ranking information; DeSanctis and Gallupe, 1987)?
◦ Which collaboration capabilities are needed, and what technologies afford them (jointly authored pages, streaming technologies, information access tools, or aggregated systems; Mittleman et al., 2008)?
◦ Which collaborative information systems (CIS) should be employed, if any (e-mail, teleconferencing, videoconferencing, dataconferencing, web-based collaborative tools, proprietary groupware tools, GSS; Bajwa et al., 2003)? What kind of CIS does the organization already have in use ('decision room', web-based GSS, videoconferencing systems, etc.)?

Facilitation:
◦ Is there a need for process (or content) facilitation (hired, in-house, CE practitioner)?
◦ Does the problem owner have the necessary individual characteristics and skills to be a potential facilitator?

These, then, are the types of issues that should be considered when managing collaboration processes; one illustrative way to capture the answers is sketched below.
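One way to operationalize this checklist is to capture the answers in a structured planning profile that the facilitator or collaboration engineer fills in before designing the process. The following minimal sketch (in Python) is our own illustration and not part of any cited method; all field names, defaults, and category labels are assumptions that simply mirror the questions above.

```python
from dataclasses import dataclass, field

@dataclass
class CollaborationPlan:
    """Hypothetical planning profile mirroring the checklist above."""
    # Goals
    general_goal: str
    subgoals: list = field(default_factory=list)
    # Task (McGrath task types: generation, choice, negotiation, execution)
    task_type: str = "generation"
    recurring: bool = False
    high_value: bool = True
    # Group
    group_size: int = 10
    distributed: bool = False
    external_stakeholders: bool = False
    # Collaboration process and technology
    mode: str = "same time - same place"
    gss_level: int = 2                     # DeSanctis and Gallupe: 1 = information exchange, 2 = decision aids
    # Facilitation
    facilitation: str = "CE practitioner"  # alternatives: "hired professional", "in-house facilitator", "none"

# Example profile for a one-off, high-value strategy workshop with a large, heterogeneous group.
plan = CollaborationPlan(
    general_goal="Renew the consortium strategy",
    subgoals=["internal environment analysis", "mission", "vision", "strategic goals"],
    group_size=16,
    external_stakeholders=True,
)
print(plan.task_type, plan.gss_level, plan.facilitation)
```

Such a profile could then feed the quality functions of the framework, or simply serve as a structured agenda for the design discussion between the problem owner and the facilitator.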
Case Example

In this section we demonstrate the application of the framework by way of an example based on Bragge et al. (2007). In that study, an action research intervention was conducted with a consortium
of 13 Finnish universities that manages and develops a common student information system. The chairman of the consortium's working committee decided to resort to facilitation support, as it appeared that the consortium's old strategy needed major revision. The number of member universities had grown from 5 to 13 in a short period of time, which had greatly complicated the consortium's unanimity-based decision-making processes. Moreover, the Europe-wide Bologna process regarding university degree reform required major actions in the near future. Thus, the chairman asked the first author of this chapter to design a collaboration process for the consortium's strategy renewal workshop, to be held a few months later on the premises of the chairing university. Strategy development is itself a difficult task, here complicated further by the multi-organizational group composition; thus, the facilitator invited another researcher experienced in strategic planning to join the intervention, as well as other researchers to assist her in the facilitation (hereafter called the facilitator team). At the outset, the facilitator team felt it necessary to lean on electronic support technology to ensure efficient and effective collaboration during the six hours given for the workshop. GSS have been found to offer unique assistance in strategic planning, which represents a complicated and dynamic group process (Dennis et al., 1997). Moreover, the same-time - same-place mode of electronic collaboration has been found to be a better fit than virtual meetings for strategic decision-making tasks. Thus, we planned to utilize the chairing university's decision-room facilities: a computer class with 25 networked computers equipped with GSS clients (GroupSystems© MeetingRoom). At the time of the workshop, the main facilitator had only limited experience in facilitating GSS sessions, aside from her normal university faculty duties. Thus, when designing the collaboration process she leaned on the valuable advice of the CE thinkLets manuscript (Briggs and de
Vreede, 2001), after having heard the specific sub-goals for the strategy workshop: (1) an internal environment analysis regarding the needs of the universities with respect to the common IS, (2) mission statement generation, (3) vision generation, and (4) generation of strategic goals and the means to achieve them. The CE facilitation process model including all these phases is provided in the appendix. In this case, instead of transferring the process execution to a practitioner, the CE approach was exploited by a novice facilitator to improve her facilitation skills, both in engineering the process design and in executing it. The CE approach indeed provided invaluable aid for these tasks. The implementation of the process showed that the selected thinkLets provided the patterns of collaboration as predicted. Moreover, the process recipe may now easily be repeated by other facilitators or even practitioners. The group members (16) from the consortium's 13 universities were very content with the collaboration process and its results, according to the anonymous session feedback survey and two post-session interviews with the chairman of the working committee. Based on 16 answers on a 1-7 Likert scale, the objectives of the session were regarded as well achieved (mean 5.50, SD 0.73), the results were considered very useful (mean 5.69, SD 0.70), and the e-collaboration process helped the participants to focus the discussion on essential matters (mean 5.63, SD 0.89). All of the participants recommended the use of GroupSystems to others. The GSS benefits that were highlighted were anonymity, equality, interactivity, efficiency, effectiveness, online voting, and documentation. The participants liked the systematic, controlled way of collaborating and regarded GSS as an ideal tool for a large and heterogeneous group. The ideas collected were considered useful for the development of the consortium strategy as well as for the common information system. The quantity and quality of the results gained were regarded as superior to those obtained using their conventional strategy
development method. Also, huge amounts of time (even months) were saved. Finally, the facilitator team's situational sensitivity and flexibility, as well as its expertise in GSS and in the substance matter, were appreciated (Bragge et al., 2007). Besides the CE approach, the multi-organizational collaborative team (MCT) framework of Ackermann et al. (2005) also offered the facilitator team many insights for handling the teamwork dynamics during the session. For example, the main facilitator was once asked to take the role of a legitimate and knowledgeable arbitrator, that is, to give an opinion during the discussions. Moreover, GSS was found to be an excellent instrument for mitigating many of the impediments to MCT co-operation summarized by Ackermann et al. (2005). For example, the novelty of the GSS-aided way of collaborating alleviated the group's lack of a common history and guided the participants to look at the strategy neutrally and to focus on the future, not on the past. The anonymity feature of GSS was also extremely valuable. It mitigated the rise of conflicts, complex politics, and power relations, although there were many diverging opinions among the participants and their organizations. In addition, the long-term effects of this kind of 'procedurally just' and fair process on the challenges of strategy making should be positive (Eden and Ackermann, 2001). Regarding improvements suggested for the process, one participant pointed out that the use of electronic communication does not eliminate the need for deep face-to-face discussion and deliberation of the ideas. Thus, the process should be continued with conventional meetings, as was also agreed in the workshop conclusion. Secondly, in all voting-based methods there is a risk that an average option wins and the wildest options are automatically discarded, although they may sometimes turn out to be real jewels. However, the facilitator attempted to mitigate this problem through discussions, by re-sorting the results by standard deviation to show the
sources of the largest disagreements (the 'Crowbar' thinkLet). Third, there were divergent opinions about the workshop schedule: some thought it was too tight, while others regarded it as too slow. Our case represents knowledge sharing and creation at the individual and organizational levels, but also at the inter-organizational level. Besides the electronic idea generation and prioritization, the opportunity to verbally discuss matters that came up during the collaboration process was also regarded as very important. The key issues discussed were the lack of common terminology in the consortium and the need for common business processes. The workshop also proved to be extremely useful for the newcomers in the consortium, in the sense that it allowed them to efficiently gain new knowledge about the consortium and its other member universities.
Conclusion

Knowledge is the capability to make decisions and the primary resource for all organizational transformations. Knowledge exists at various levels, not only at the personal level but also at the group and organizational levels. Although the means to share information, communicate, and express ourselves have broadened considerably during the last decades, a lot of relevant knowledge in organizations remains unmined, unshared, and underutilized. A knowledge strategy is a type of resource-based strategy; it is a scheme for doing epistemic work in an organizational context, that is, for creating, converting, sharing, storing, securing, using, and evaluating knowledge resources. In this chapter, we have proposed a framework for advancing the success of a collaboration centered knowledge management strategy. We have elaborated on previous research on knowledge management, on the closely related success models, and on theories of collaboration processes. Our framework helps in understanding the variations of the collaboration
processes, the quality ingredients of the phases during the process, and the subsequent impacts of process quality on goal achievement and, further, on the final success. Since the collaboration centered KM strategy is a process-oriented strategy, the proposed framework needs to be process-oriented, too. In addition to the process, the framework focuses on the four categories of controllable (manageable) and uncontrollable inputs and on the respective outcomes. The dynamic process perspective with controllable inputs makes it possible to advance, that is, to manage, the collaboration strategy towards the intended goals and final success. From the research point of view, the next step is to develop and validate a measurement instrument, collect data, and search for the underlying relationships between the constructs. In this way it is possible to make the most of the framework and of the collaboration centered knowledge management strategy at both the scientific and the practical level.
REFERENCES

Ackermann, F., Franco, L. A., Gallupe, R. B., & Parent, M. (2005). GSS for Multi-Organizational Collaboration: Reflections on Process and Content. Group Decision and Negotiation, 14(4), 307–331. doi:10.1007/s10726-005-0317-4

Anson, R., Bostrom, R. P., & Wynne, B. (1995). An Experiment Assessing GSS and Facilitator Effects on Meeting Outcomes. Management Science, 41(2), 189–208. doi:10.1287/mnsc.41.2.189

Bajwa, D. S., Lewis, L. F., & Pervan, G. (2003). Adoption of Collaboration Information Technologies in Australian and US Organizations: A Comparative Study. In Proceedings of the 40th Hawaii International Conference on System Sciences. Retrieved April 17, 2009 from http://www2.computer.org/plugins/dl/pdf/proceedings/hicss/2003/1874/01/187410017c

Barney, J. B. (1986). Organizational culture: Can it be a source of sustained competitive advantage? Academy of Management Review, 11(3), 656–665. doi:10.2307/258317

Bierly, P., & Chakrabarti, A. (1996). Generic Knowledge Strategies in the U.S. Pharmaceutical Industry. Strategic Management Journal, 17(Winter Special Issue), 123–135.

Bragge, J., den Hengst, M., Tuunanen, T., & Virtanen, V. (2005a). A Repeatable Collaboration Process for Developing a Road Map for Mobile Marketing. In Proceedings of the 11th Americas Conference on Information Systems. Retrieved May 4, 2009 from http://aisel.aisnet.org/amcis2005/198

Bragge, J., & Merisalo-Rantanen, H. (2009). Engineering E-Collaboration Processes to Obtain Innovative End-User Feedback on Advanced Web-Based Information Systems. Journal of the Association for Information Systems, 10(3), 196–220.

Bragge, J., Merisalo-Rantanen, H., & Hallikainen, P. (2005b). Gathering Innovative End-User Feedback for Continuous Development of Information Systems: A Repeatable and Transferable E-Collaboration Process. IEEE Transactions on Professional Communication, 48(1), 55–67. doi:10.1109/TPC.2004.843298

Bragge, J., Merisalo-Rantanen, H., Nurmi, A., & Tanner, L. (2007). A Repeatable E-Collaboration Process Based on ThinkLets for Multi-Organization Strategy Development. Group Decision and Negotiation, 16(2), 363–379. doi:10.1007/s10726-006-9055-5

Briggs, R. O., & de Vreede, G. J. (2001). ThinkLets: Building Blocks for Concerted Collaboration (Version 1.0). Tucson: GroupSystems.com.
Briggs, R. O., de Vreede, G. J., & Nunamaker, J. F. (2003). Collaboration Engineering with ThinkLets to Pursue Sustained Success with Group Support Systems. Journal of Management Information Systems, 19(4), 31–64.

Carlile, P., & Rebentisch, E. S. (2003). Into the Black Box: The Knowledge Transformation Cycle. Management Science, 49(9), 1180–1195. doi:10.1287/mnsc.49.9.1180.16564

Chen, F., Romano, N., & Nunamaker, J. F. (2006). A Collaborative Project Management Approach and a Framework for Its Supporting Systems. Journal of International Technology and Information Management, 15(2), 1–16.

Choo, C. W. (1998). The knowing organization. New York: Oxford University Press.

Clawson, V. K., Bostrom, R. P., & Anson, R. (1993). The Role of the Facilitator in Computer-Supported Meetings. Small Group Research, 24(4), 547–565. doi:10.1177/1046496493244007

Cook, S. D. N., & Brown, J. S. (1999). Bridging Epistemologies: The Generative Dance Between Organizational Knowledge and Organizational Knowing. Organization Science, 10(4), 381–400. doi:10.1287/orsc.10.4.381

Cross, R., & Baird, L. (2000). Technology Is Not Enough: Improving Performance by Building Organizational Memory. Sloan Management Review, 41(3), 41–54.

de Vreede, G.-J., Briggs, R. O., & Massey, A. P. (2009). Collaboration Engineering: Foundations and Opportunities: Editorial to the Special Issue on the Journal of the Association of Information Systems. Journal of the Association for Information Systems, 10(3), 121–137.
de Vreede, G.-J., Fruehling, A., & Chakrapani, A. (2005). A Repeatable Collaboration Process for Usability Testing. In Proceedings of the 38th Hawaii International Conference on System Sciences. Retrieved April 17, 2009 from http://www2.computer.org/plugins/dl/pdf/proceedings/hicss/2005/2268/01/22680046.pdf

DeLone, W., & McLean, E. (1992). Information Systems Success: The Quest for the Dependent Variable. Information Systems Research, 3(1), 60–95. doi:10.1287/isre.3.1.60

DeLone, W., & McLean, E. (2003). The DeLone and McLean Model of Information Systems Success: A Ten-Year Update. Journal of Management Information Systems, 19(4), 9–30.

den Hengst, M., & Adkins, M. (2007). Which collaboration patterns are most challenging: A global survey of facilitators. In Proceedings of the 40th Annual Hawaii International Conference on System Sciences. Retrieved May 4, 2009 from csdl.computer.org/comp/proceedings/hicss/2007/2755/00/27550017b.pdf

den Hengst, M., Dean, D., Kolfschoten, G., & Chakrapani, A. (2006). Assessing the Quality of Collaborative Processes. In Proceedings of the 39th Hawaii International Conference on System Sciences. Retrieved April 17, 2009 from http://www2.computer.org/plugins/dl/pdf/proceedings/hicss/2006/2507/01/250710016b.pdf

den Hengst, M., van de Kar, E., & Appelman, J. (2004). Designing mobile information services: user requirements elicitation with GSS design and application of a repeatable process. In Proceedings of the 37th Annual Hawaii International Conference on System Sciences. Retrieved April 17, 2009 from http://www2.computer.org/plugins/dl/pdf/proceedings/hicss/2004/2056/01/205610018c.pdf
Dennis, A., & Wixom, B. (2001). Investigating the moderators of the group support system use with meta-analysis. Journal of Management Information Systems, 18(3), 235–257.
Huber, G. P. (1984). Issues in the Design of Group Decision Support Systems. Management Information Systems Quarterly, 8(3), 195–204. doi:10.2307/248666
Dennis, A. R., Tyran, C. K., Vogel, D., & Nunamaker, J. F. (1997). Group Support Systems for Strategic Planning. Journal of Management Information Systems, 14(1), 155–184.
Jennex, M. E., & Olfman, L. (2006). A Model of Knowledge Management Success. International Journal of Knowledge Management, 2(3), 51–68.
DeSanctis, G., & Gallupe, R. B. (1987). A Foundation for the Study of Group Decision Support Systems. Management Science, 33(5), 589–609. doi:10.1287/mnsc.33.5.589 Eden, C., & Ackermann, F. (2001). Group Decision and Negotiation in Strategy Making. Group Decision and Negotiation, 10(2), 119–140. doi:10.1023/A:1008710816126 Fjermestad, J., & Hiltz, S. R. (1999). An Assessment of Group Support Systems Experimental Research: Methodology and Results. Journal of Management Information Systems, 15(3), 7–150. Fjermestad, J., & Hiltz, S. R. (2000). Group Support Systems: A Descriptive Evaluation of Case and Field Studies. Journal of Management Information Systems, 17(3), 112–157. Goodhue, D. L., & Thompson, R. L. (1995). Task-Technology Fit and Individual Performance. Management Information Systems Quarterly, 19(2), 213–236. doi:10.2307/249689 Hansen, M. T., & Nohria, N. (2004). How to Build Collaborative Advantage? MIT Sloan Management Review, 46(1), 22–30.
Jennex, M. E., Smolnik, S., & Croasdell, D. (2007). Towards Defining Knowledge Management Success. In Proceedings of the 40th Hawaii International Conference on System Sciences. Retrieved April 6, 2009 from http://www2.computer.org/plugins/dl/pdf/proceedings/hicss/2007/2755/00/27550193c.pdf

Jennex, M. E., Smolnik, S., & Croasdell, D. (2008). Towards Measuring Knowledge Management Success. In Proceedings of the 41st Hawaii International Conference on System Sciences. Retrieved April 6, 2009 from http://www2.computer.org/plugins/dl/pdf/proceedings/hicss/2008/3075/00/30750360.pdf

Khalifa, M., Yu, A. Y., & Shen, K. N. (2008). Knowledge management systems success: a contingency perspective. Journal of Knowledge Management, 12(1), 119–132. doi:10.1108/13673270810852430

Kivijärvi, H. (2008). Aligning Knowledge and Business Strategies within an Artificial Ba. In Abou-Zeid, E.-S. (Ed.), Knowledge Management and Business Strategies: Theoretical Frameworks and Empirical Research. Hershey, PA: Information Science Reference.
Hansen, M. T., Nohria, N., & Tierney, T. (1999). What’s your strategy for managing knowledge? Harvard Business Review, 77(2), 107–116.
Kline, T. J. B., & McGrath, J.-L. (1998). Development and validation of five criteria for evaluating team performance. Organization Development Journal, 16(3), 19–27.
Holsapple, C. W., & Whinston, A. B. (1996). Decision Support Systems – A Knowledge-Based Approach. Minneapolis, St. Paul: West Publishing Company.
Kline, T. J. B., & McGrath, J.-L. (1999). A Review of the Groupware Literature: Theories, Methodologies, and a Research Agenda. Canadian Psychology, 40(3), 265–271. doi:10.1037/h0086842
Kolfschoten, G. L. (2007). Theoretical Foundations for Collaboration Engineering, Dissertation, Delft University of Technology, 269. Kolfschoten, G. L., Briggs, R. O., de Vreede, G. J., Jacobs, P. H. M., & Appelman, J. (2006). A Conceptual Foundation of the ThinkLet Concept for Collaboration Engineering. International Journal of Human-Computer Studies, 64(7), 611–621. doi:10.1016/j.ijhcs.2006.02.002 KPMG. (2002/2003). Insights from KPMG’s European Knowledge management Survey 2002/2003. KPMG Knowledge Advisory Services, The Netherlanders. Kulkarni, U., Ravindran, S., & Freeze, R. (2006). A Knowledge Management Success Model: Theoretical Development and Empirical Validation. Journal of Management Information Systems, 23(3), 309–347. doi:10.2753/MIS07421222230311 Lai, J.-Y., Wang, C.-T., & Chou, C.-Y. (2008). How Knowledge Map and Personalization Affect Effectiveness of KMS in High-Tech Firms. In Proceeding of the 41st Hawaii International Conference on System Sciences. Retrieved April 6, 2009 from http://www2.computer.org/plugins/dl/pdf / proceedings/hicss/2008/3075/00/30750355.pdf Liebman, J. S. (1998). Teaching Operations Research: Lessons from Cognitive Psychology. Interfaces, 28(2), 104–110. doi:10.1287/inte.28.2.104 Lin, H.-F. (2007). A stage model of knowledge management: an empirical investigation of process and effectiveness. Journal of Information Science, 33(6), 643–659. doi:10.1177/0165551506076395 Lin, T., & Huang, C. (2008). Understanding knowledge management system usage antecedents: An integration of social cognitive theory and task technology fit. Information & Management, 45(6), 410–417. doi:10.1016/j.im.2008.06.004
Lindsey, K. (2002). Measuring Knowledge Management Effectiveness: A Task-Contingent Organizational Capabilities Perspective. Eighth Americas Conference on Information Systems, 2085–2090.

Lorge, I., Fox, D., Davitz, J., & Brenner, M. (1958). A survey of studies contrasting the quality of group performance and individual performance, 1920–1957. Psychological Bulletin, 55(6), 337–372. doi:10.1037/h0042344

Mäki, E. (2008). Exploring and Exploiting Knowledge. Research on Knowledge Processes in Knowledge-Intensive Organizations (Doctoral Dissertation). Helsinki University of Technology.

McGrath, J. E. (1984). Groups: Interaction and performance. Englewood Cliffs, NJ: Prentice-Hall.

Mintzberg, H. (1978). Patterns of Strategy Formulation. Management Science, 24(9), 934–948. doi:10.1287/mnsc.24.9.934

Mintzberg, H., & Waters, J. A. (1985). Of Strategies, Deliberate and Emergent. Strategic Management Journal, 6(3), 257–272. doi:10.1002/smj.4250060306

Mittleman, D. D., Briggs, R. O., Murphy, J., & Davis, A. (2008). Toward a Taxonomy of Groupware Technologies. Presented at the 14th Collaboration Researchers' International Workshop on Groupware. Retrieved April 17, 2009 from http://ihop.typepad.com/docs/criwg2008.pdf and Appendix from http://ihop.typepad.com/docs/webfacilitationtools.xls

Muhammed, S., Doll, W. J., & Deng, X. (2008). Exploring the Relationships among Individual Knowledge Management Outcomes. In Proceedings of the 41st Annual Hawaii International Conference on System Sciences. Computer Society Press.
Murthy, U. S., & Kerr, D. S. (2000). Task/Technology Fit and the Effectiveness of Group Support Systems: Evidence in the Context of Tasks Requiring Domain Specific Knowledge. In Proceedings of the 33rd Annual Hawaii International Conference on System Sciences. Computer Society Press.

Myers, P. (Ed.). (1996). Knowledge management and organizational design. Boston: Butterworth–Heinemann.

Niederman, F., Beise, C. M., & Beranek, P. M. (1996). Issues and Concerns about Computer-Supported Meetings: The Facilitator's Perspective. Management Information Systems Quarterly, 20(1), 1–22. doi:10.2307/249540

Nissen, M., & Jennex, M. (2007). Toward Multidimensional Conceptualization of Knowledge. In Jennex, M. E. (Ed.), Knowledge Management in Modern Organizations (pp. 278–284). Hershey, PA: Idea Group Publishing.

Nonaka, I., & Takeuchi, H. (1995). The Knowledge Creating Company. New York: Oxford University Press.

Nunamaker, J. F., Jr., Briggs, R. O., & Romano, N., Jr. (2001). A Framework for Collaboration and Knowledge Management. In Proceedings of the 34th Hawaii International Conference on System Sciences. Retrieved April 17, 2009 from http://www2.computer.org/plugins/dl/pdf/proceedings/hicss/2001/0981/01/09811060.pdf

Nunamaker, J. F., Dennis, A. R., Valacich, J. S., Vogel, D. R., & George, J. F. (1991). Electronic Meeting Systems to Support Group Work. Communications of the ACM, 34(7), 40–61. doi:10.1145/105783.105793

Parasuraman, A. (2000). Technology Readiness Index (TRI). Journal of Service Research, 2(4), 307–320. doi:10.1177/109467050024001

Penrose, E. T. (1959). The theory of the growth of the firm. New York: Wiley and Sons.
Pinsonneault, A., & Kraemer, K. L. (1989). The Impact of Technological Support on Groups: An Assessment of the Empirical Research. Decision Support Systems, 5(2), 197–216. doi:10.1016/0167-9236(89)90007-9

Polanyi, M. (1962). Personal Knowledge. Chicago: University of Chicago Press.

Savage, C. M. (1996). Fifth generation management: co-creating through virtual enterprising, dynamic teaming and knowledge networking. Boston: Butterworth-Heinemann.

Scharmer, C. O. (2001). Self-transcending Knowledge: Organizing Around Emerging Realities. In Nonaka, I., & Teece, D. J. (Eds.), Managing Industrial Knowledge (pp. 68–90). London: Sage Publications.

Schwarz, R. (2002). The Skilled Facilitator: A Comprehensive Resource for Consultants, Facilitators, Managers, Trainers and Coaches. San Francisco: Jossey-Bass.

Seddon, P. B. (1997). A Respecification and Extension of the DeLone and McLean Model of IS Success. Information Systems Research, 8(3), 240–253. doi:10.1287/isre.8.3.240

Smith, H. A., McKeen, J. D., & Singh, S. (2007). Tacit Knowledge Transfer: Making It Happen. Journal of Information Science & Technology, 3(3), 50–72.

Steiner, I. D. (1972). Group Process and Productivity. New York: Academic Press.

Stenmark, D. (2001). Leveraging Tacit Organizational Knowledge. Journal of Management Information Systems, 17(3), 9–24.

Sveiby, K. E. (1997). The New Organizational Wealth: Managing and Measuring Knowledge-based Assets. San Francisco: Berrett-Koehler Publishers, Inc.
Tapscott, D. (2006). Winning with the Enterprise 2.0, IT and Collaborative Advantage, New Paradigm, Retrieved April 17, 2009 from http:// newparadigm.com/media/Winning_with_the_ Enterprise_2.0.pdf Thomas, B. D. (2006). An Empirical Investigation of Factors Promoting Knowledge Management System Success. Doctoral Dissertation, Texas Tech University, Retrieved April 17, 2009 from http:// etd.lib.ttu.edu/theses/available/etd-07072006105657/unrestricted/ Thomas_Bobby_Diss.pdf Tsoukas, H., & Vladimirou, E. (2001). What is Organizational Knowledge? Journal of Management Studies, 38(7), 973–993. doi:10.1111/14676486.00268 Turban, E., Aronson, J. E., Liang, T.-P., & Sharda, R. (2007). Decision Support and Business Intelligence Systems. Upper Saddle River, NJ: Prentice Hall. Venkatraman, N. (1989). Strategic orientation of business enterprises. Management Science, 35(8), 942–962. doi:10.1287/mnsc.35.8.942 Wernerfelt, B. (1984). A Resource-Based View of the Firm. Strategic Management Journal, 5(2), 171–180. doi:10.1002/smj.4250050207 Wu, J., & Wang, Y. (2006). Measuring KMS Success: A respecification of the DeLone and McLean’s model. Information & Management, 43(7), 728–739. doi:10.1016/j.im.2006.05.002 Zack, M. H. (1999). Developing a Knowledge Strategy. California Management Review, 41(3), 125–145. Zigurs, I., & Buckland, B. K. (1998). A Theory of Task/Technology Fit and Groups Support Systems Effectiveness. Management Information Systems Quarterly, 22(3), 313–334. doi:10.2307/249668
Zigurs, I., Buckland, B.K., Connolly, J.R., & Wilson, E.V. (1999). A Test of Task-Technology Fit Theory of Group Support Systems. The Data Base for Advances in Information Systems, 30(3,4), 34-50.
KEY TERMS AND DEFINITIONS

Collaboration Engineer: A collaboration engineer designs and documents collaboration processes that can be readily transferred to a practitioner (Kolfschoten et al., 2006).
Collaboration Engineering (CE): Collaboration engineering is an approach that designs, models, and deploys repeatable collaboration processes for recurring collaborative tasks that are executed by practitioners using facilitation techniques and technology (Kolfschoten et al., 2006).
Collaboration Process: A collaboration process is built as a sequence of facilitation interventions that create patterns of collaboration, that is, predictable group behavior with respect to a goal (Kolfschoten et al., 2006).
Facilitator: A facilitator both designs and conducts a dynamic process that involves managing relationships, tasks, and technology, as well as structuring tasks and contributing to the effective accomplishment of the meeting's outcome (Kolfschoten et al., 2006).
Group Support Systems (GSS): Group support systems are a suite of collaborative software tools that can be used to focus and structure a team's deliberation while reducing the cognitive costs of communication and information access and minimizing distraction among teams working collaboratively toward a goal (Briggs et al., 2003).
Knowledge Sharing: Knowledge sharing is a process through which knowledge is exchanged among individuals, within or between groups or organizations.
Knowledge: The individual or organizational ability to make decisions.
Tacit Knowledge: Tacit knowledge is the hidden capability of a person and is difficult to articulate or to transfer to another person.
ThinkLet: A thinkLet is a named, packaged facilitation technique that creates a predictable, repeatable pattern of collaboration among people working toward a goal (Kolfschoten et al., 2006).
APPENDIX

Figure 4. CE Facilitation Process Model for a multi-organizational strategy development process (modified from Bragge et al., 2007)
Chapter 13
The Relevance of Integration for Knowledge Management Success: Towards Conceptual and Empirical Evidence

Alexander Orth, Accenture, Germany
Stefan Smolnik, EBS University of Business and Law, Germany
Murray E. Jennex, San Diego State University, USA
ABSTRACT

Many organizations pursue knowledge management (KM) initiatives with different degrees of success. One key aspect of KM often neglected in practice is following an integrated and holistic approach. In a complementary fashion, KM researchers have increasingly focused on factors that determine KM success and examined whether the metrics used to measure KM initiatives are reasonable. In this article, the importance of integration issues for successful KM is analyzed by means of a case study of a KM initiative at an international consulting company. The investigations demonstrate the importance of an integrated KM approach (an integrated view of KM strategy, KM processes, KM technology, and company culture) to ensure KM success.

DOI: 10.4018/978-1-60566-709-6.ch013
INTRODUCTION AND OVERVIEW

Subject and Purpose of the Chapter

Knowledge management (KM) has progressed from an emergent concept to an increasingly common function in business organizations over the past 20 years. Intense competition, fickle consumers, shorter product life cycles, and globalization are some of the driving forces that have led to increased inspection of the usage, application, and leveraging of knowledge in organizations. Successful KM is expected to have a positive influence on a company's performance and effectiveness. It consists of critical enablers, such as employee training, teamwork, and performance measurement. This leads to the first observation: KM is crucial for a company to succeed. Successful KM depends on the achievement of critical success factors that are based on supporting conditions. Although KM systems (KMS) are shown to provide benefits to organizations, they have a high chance of failure due to both technical and IT-related factors, as well as KM-related cultural, behavioral, and strategic factors, similar to many other types of information systems (IS). Problems experienced in KM initiatives are assumed to be the result of one or more of the following three factors:

1. A focus on the technological dimension of KM (i.e., KMS), together with a lack of attention to the social dimension (e.g., organizational culture).
2. The absence of a clearly defined purpose and value for the business. In this context, a key requirement for realizing the business value of KM is the institutionalization of KM practices and systems into people's natural work flow.
3. KM frameworks', concepts', and systems' lacking adaptation to the specific requirements of corporate contexts. Given its focus on people and their interactions, KM is intrinsically highly context specific. Each organizational setting poses its own challenges for successful KM.

These aspects lead to the second observation: an integrated and holistic KM initiative, as well as the complete embedding of KM in organizations, will be essential for KM success.

Based on both observations, the overall goal of this article is to analyze and investigate coherences, connections, and interdependencies between KM success and an integrated and holistic view of the subject area. The corresponding research question can be formulated as follows: To what extent do KM success factors that are accepted in the literature support an integrative perspective, and does such a perspective account for KM success?
Research Approach and Structure

Qualitative case study research was employed for this study. Section 2 introduces Riempp's architecture for integrated KMS and its performance measurement system. Section 3 is an overview of KM and KMS success. It also discusses the success assessment framework of Jennex and Olfman. Section 4 compares the key performance indicators of Riempp's architecture for integrated KMS and the critical success factors of Jennex and Olfman's success assessment framework, using the case study findings. Section 5 concludes the article by outlining the findings, limitations, and further research areas.
FOUNDATIONS ON INTEGRATED KMS ARCHITECTURES

Background of KMS Approaches

Alavi/Leidner define KM as a "systemic and organizationally specified process for acquiring, organizing, and communicating knowledge of employees so that other employees may make use of it to be more effective and productive in their work". KM systems are clarified as "IT-based systems developed to support and enhance the organizational processes of knowledge creation, storage/retrieval, transfer and application". Alternatively, Jennex took a holistic view of a KMS as a system created by combining content, organizational processes, users, and technical solutions to facilitate the capture, storage, retrieval, transfer, and reuse of knowledge to improve organizational and individual decision-making. This holistic view, which integrates people, process, and technology, is a Churchman view of KM that allows the KMS to take the form required to accomplish KM goals. Two kinds of KMS implementations are used to address the comparison between KM approaches: an approach based on infrastructure or generic systems (KM in the large) and an approach based on processes or tasks (KM in the small). The latter perspective mainly focuses on employees' usage of knowledge in a task, process, or project that already possesses a common context of understanding in order to improve the effectiveness of that task, process, or project. On the other hand, the former perspective assumes that users do not have a common context of understanding. It concentrates on the construction of a KMS which supports KM processes throughout the organization and which captures more knowledge contexts. The integrated KMS is designed to fit both aspects. The approach based on processes or tasks supports specific tasks and processes, whereas the infrastructure or generic-system-orientated perspective helps to integrate the knowledge within a system in order to use it efficiently across the organization.

Riempp's Architecture for Integrated KMS

Riempp's architecture for integrated KMS was developed by combining desk research, multiple case studies, and action research. The field research involved a KM initiative at PricewaterhouseCoopers, as well as studies and workshops with ten organizations in the context of the "Customer Knowledge Management" competence centre at the University of St. Gallen. Riempp's architecture for integrated KMS consists vertically of three layers (strategy, process, and system) and horizontally of four pillars (content, competence, collaboration, and orientation). All these elements are influenced by the organizational culture (Figure 1). The strategy layer is composed of the business strategy, the KM strategy and KM goals, as well as the measurement system. In the latter, metrics are defined to monitor the progress of the KM initiatives. The measurement system of the integrated KMS architecture will be discussed in more detail in the next section. The process layer consists of business and support processes. KM processes constitute support processes and are subject to the KM strategy. Employees with specific KM roles execute the KM processes by accomplishing specific KM activities. The system layer describes the integrated KMS, which is ideally accessed through a portal. The KMS supports the KM processes and is composed of the following four functional pillars:
1. Content relates to the management of information objects, their context, and the management of content itself.
2. Collaboration refers to the identification, exchange, development, and usage of knowledge.
3. Competence addresses all aspects of individual and collective competencies in an organization.
4. Orientation is composed of all search, navigation, and administration functions required in the areas of content, competence, and collaboration.

Figure 1. Overview of Riempp's integrated KMS architecture

The architecture for integrated KMS distinguishes between different dimensions of integration. The elements of the architecture described above should be integrated along the following four key dimensions (a brief illustrative sketch follows the list):

1. Integration with the culture is the central dimension of integration. It is aligned to the norms, values, and paradigms that need to be reflected when configuring an integrated KMS.
2. Vertical integration between the three layers firstly indicates that KM processes should be in line with the KM strategy and, secondly, that the configuration of the strategy and process layers influences the design of the system layer.
3. Horizontal integration refers to the integration between the four pillars of the architecture. It can be achieved on the system layer as well as on the process layer.
4. Integration of the KM processes and roles in the KMS finally means that "the KMS should be designed in order to support employees in the execution of their roles within business and support processes as well as related KM processes".
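The layered structure described above lends itself to a compact data representation. The following sketch is purely illustrative and not part of Riempp's work: it encodes the three layers, four pillars, and integration dimensions as plain Python data, and the km_processes records together with the unaligned_processes helper are hypothetical examples of the kind of consistency check that the vertical-integration dimension calls for.

```python
# Illustrative sketch (not from Riempp): the three layers, four pillars,
# and integration dimensions of the integrated KMS architecture as plain data.

LAYERS = ("strategy", "process", "system")
PILLARS = ("content", "competence", "collaboration", "orientation")

INTEGRATION_DIMENSIONS = {
    "cultural": "alignment with norms, values, and paradigms",
    "vertical": "KM processes follow the KM strategy; strategy and process layers shape the system layer",
    "horizontal": "integration across the four pillars on the system and process layers",
    "process_and_role": "KM processes and roles are supported by the KMS",
}

# Hypothetical example data: each KM process records the strategy goal it
# serves and the pillar(s) it touches, so a rough vertical-integration check
# becomes possible.
km_processes = [
    {"name": "maintain competence registry", "goal": "competence transparency",
     "pillars": {"competence", "orientation"}},
    {"name": "publish project debriefs", "goal": None,  # no strategy goal linked
     "pillars": {"content"}},
]

def unaligned_processes(processes):
    """Return processes that lack a link to a KM strategy goal
    (a rough proxy for missing vertical integration)."""
    return [p["name"] for p in processes if not p["goal"]]

print(len(LAYERS), "layers x", len(PILLARS), "pillars")
print(unaligned_processes(km_processes))  # ['publish project debriefs']
```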
The Measurement System of Riempp's Integrated KMS Architecture

The implementation of this vision of Riempp's architecture for integrated KMS can be illustrated on the basis of a basic KM process model consisting of the following four steps:

1. Create knowledge transparency (about the knowledge that already exists in an organization, knowledge managing processes, and the respective IS).
2. Promote knowledge exchange.
3. Control knowledge development.
4. Ensure knowledge efficiency.

Figure 2. Detailed view on the strategy layer (modified)
Within step 4, the target achievement of the previous steps should be verified by taking quality improvements as well as time and cost reductions into consideration. In order to fulfil this requirement, the architecture provides a measurement system on the strategy layer. The achievement of specific KM goals can be verified by means of key performance indicators (KPIs) and respective index values (IVs). The KM strategy similarly refers to the four pillars of content, collaboration, competence and orientation, as well as to the
culture of the organization. The KM goals, KPIs and IVs are subordinate to the KM strategy and mostly only refer to single pillars or to the culture (see Figure 2). Riempp defines a total of 78 KPIs which, according to an integrated and holistic view, refer to the different dimensions of integration within the architecture (see Figure 3). This means that Riempp’s KPIs verify the integrational success or successful integration of KMS. To conclude this section, three meaningful examples that illustrate how well these KPIs reflect integration and verify integration success are briefly described: KPI 22 (“Clear competence management goals”) is an example of how a KPI verifies the success of vertical integration with the competence management goals formulated by the KM strategy and the KM processes constructed accordingly. KPI 14 (“Information objects are ideally stored in an integrated, database-based, information memory, which is applicable across all
platforms.") is an example of the verification of successful horizontal integration on the system layer. For example, an information object can be generated (content pillar) and used in a collaboration room (collaboration pillar) afterwards. KPI 1 ("Convenient integration of information objects in task execution") illustrates how a KPI verifies the integration of the KM processes and roles within the KMS. Figure 4 gives a detailed view on the relationship of KPIs and the different integration dimensions.

Figure 3. Overview of Riempp's 78 key success factors (KPIs 1-18; layers: system and process; pillar: content)

KPI 1: Convenient integration of information objects in task execution
KPI 2: Sufficient knowledge of process-involved persons about process flows
KPI 3: Simple, fast, and flexible execution of content management processes without unnecessary barriers
KPI 4: Creation and preservation of incentives (e.g., in the form of rewards)
KPI 5: Feedback opportunities between users and authors
KPI 6: Chance to extend the target group over multiple stages in order to protect confidentiality and property rights
KPI 7: Disassociation of active, relevant information objects from non-active, irrelevant information objects
KPI 8: Comfort and clarity of the user interface
KPI 9: Active usage by authors so that searching employees can find content easily and get motivated to become authors themselves
KPI 10: Sufficient knowledge of users about the operation and handling of content management functions (e.g., by trainings)
KPI 11: Adequate selection of users in order to avoid an information overload (e.g., by taxonomy-based classification and selection)
KPI 12: Preferably rich context development (e.g., by rich text formatting, grouping, linking, etc.)
KPI 13: Relevance, authenticity, timeliness, and usefulness of localized information objects
KPI 14: Information objects are ideally stored in an integrated, database-based information memory, which is applicable across all platforms
KPI 15: Integratability with other applications by standardized interfaces
KPI 16: Comfortable creation and revision of information objects in the daily work environment using familiar tools (e.g., WYSIWYG)
KPI 17: Disassociation of content, structure, presentation, and application logic
KPI 18: Rendering of all possible file formats for various clients
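The chapter does not prescribe how the index values are aggregated; the sketch below shows one plausible way to roll hypothetical, normalized index values (IVs) per KPI up into an achievement score for a KM goal on the strategy layer, in the spirit of the measurement system described above. The goal name, the KPI assignment, and all numbers are invented for illustration.

```python
# Hedged sketch: aggregating index values (IVs) per KPI into a goal score.
# The goal-to-KPI assignment, weights, and observed values are hypothetical.

from statistics import mean

# A KM goal on the strategy layer is monitored through a set of KPIs;
# each KPI is observed through one or more index values normalized to 0..1.
goal_to_kpis = {"improve content reuse": [1, 5, 13, 14]}

observed_ivs = {        # KPI id -> list of normalized index values
    1:  [0.8, 0.7],
    5:  [0.6],
    13: [0.9, 0.85],
    14: [0.4],
}

def kpi_score(kpi_id):
    """Average the normalized index values observed for one KPI."""
    return mean(observed_ivs.get(kpi_id, [0.0]))

def goal_achievement(goal):
    """Unweighted mean of the KPI scores assigned to a goal (0..1)."""
    return mean(kpi_score(k) for k in goal_to_kpis[goal])

print(round(goal_achievement("improve content reuse"), 2))
```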
KM/KMS SUCCESS AND CRITICAL SUCCESS FACTORS

Foundations

Success is basically understood as the achievement of goals, with goals defined as prospective, aspired-to states. In the business management context, profit usually constitutes the supreme goal. This single goal does, however, not provide sufficient guidance for an organization to grow and develop. An organization ultimately needs to accommodate a spectrum of goals, including the goals of KM initiatives.
Figure 3. Continued (KPIs 19-37; layers: system and process; pillar: competence)

KPI 19: Guard against fears of "a glassy employee" by a definite authorization system and comprehensive information
KPI 20: Sufficient knowledge of process-involved persons about the process flows and the handling of competence management functions
KPI 21: Benefit and added value is distinguishable for all involved persons (e.g., by eased contacting or improved development opportunities)
KPI 22: Clear competence management goals (e.g., improvement of human resources development, creation of process flexibility, promotion of innovation)
KPI 23: Comprehensive top management support
KPI 24: Early involvement of employee representatives
KPI 25: Creation and preservation of maintenance processes (e.g., by target agreements and appraisals)
KPI 26: Securing the reliability of data and information by monitoring and examination
KPI 27: Timeliness of elements contained in the competence registry
KPI 28: Easy contact opportunities between searching employees and competences
KPI 29: Comprehensive change management
KPI 30: Applicability of elements contained in the competence registry
KPI 31: Comfortable navigation, search, and analysis options as well as effective visualization
KPI 32: An active usage of the competence registry enhances the incentives for maintenance and causes more timely and applicable entries
KPI 33: Sufficient knowledge of users about the operation and handling of competence management functions (e.g., by trainings)
KPI 34: Back-end integration with existing human resources management systems in order to avoid inconsistencies
KPI 35: Front-end integration with systems of the daily work environment
KPI 36: Multi-level authorization system
KPI 37: Active contact between searching employees and competences
The achievement of objectives and the aligned successful completion of a KM initiative result in KM success. Jennex et al. define KM and KMS success as “a multidimensional concept. Each includes capturing the right knowledge, getting the right knowledge to the right user, and using this knowledge to improve organizational and/ or individual performance. KM success is measured using the dimensions of impact on business processes, impact on strategy, leadership, and knowledge content”.
This paper takes the position that KMS success has a direct effect on KM success, making the terms interchangeable in the rest of the article. Success factor research explicitly focuses on analyzing factors that influence success by defining performance metrics to make the influence of these factors measurable and comparable. The term success factor traces back to Daniel, who used the concept for the first time in the IS context. It was afterwards broadened by Rockart to business-related aspects: "Critical success factors thus are, for any business, the limited number of areas in which results,
if they are satisfactory, will ensure successful competitive performance for the organization. They are the few key areas where things must go right for the business to flourish. If results in these areas are not adequate, the organization's efforts for the period will be less than desired".

Figure 3. Continued (KPIs 38-56; layers: system and process; pillar: collaboration)

KPI 38: Continuous engagement of role models (e.g., moderators, team officers, sponsors, etc.)
KPI 39: Acceptance and encouragement of the top management
KPI 40: Community goals are clearly formulated and consistent with organizational goals
KPI 41: Support of influential sponsors
KPI 42: Active attendance of popular experts acting as role models and precursors
KPI 43: Optional attendance and high intrinsic motivation of employees
KPI 44: Free spaces for formation, collaboration, documentation, and reflection
KPI 45: Securing the transmission and reutilization of results
KPI 46: Continuous virtual and physical meetings
KPI 47: Convenient support of information and communication systems
KPI 48: Intuitive and comfortable handling of the user interface
KPI 49: Active usage by role models and antetypes (e.g., executive managers) in order to have a multiplication effect
KPI 50: Seamless integration of community management functions into the daily work environment
KPI 51: Sufficient knowledge about the operation and handling of community management functions (e.g., by trainings)
KPI 52: Continuous engagement of role models (e.g., moderators, team officers, sponsors, etc.)
KPI 53: High network capacity
KPI 54: An integrated user interface
KPI 55: Users are, as far as possible, always online in virtual rooms
KPI 56: Automatic adaptation of available functions to connected hardware and differing network capacities
The Jennex/Olfman KM Success Assessment Framework

KM/KMS success measurement is crucial from an organizational as well as an academic perspective, as the evaluation of KM initiatives is essential to understand how KMS should be built and implemented. Several KM/KMS success and effectiveness models have been proposed in order to support
the successful execution of KM initiatives and ensure KM/KMS success. Jennex and Olfman developed a model assessment framework based on comparing existing KM/KMS success models to KM/KMS success factors. It determined the degree to which the models have a theoretical foundation, as well as whether the models could be applied to both approaches (the one based on process and tasks, as well as the one based on infrastructure and generic systems) in order to implement a KMS. In the following two sections, the main results of Jennex and Olfman’s research will be highlighted.
Figure 3. Continued (KPIs 57-78; layers: system and process; pillar: orientation)

KPI 57: Adequate compromise between simplicity and clarity of navigation and search vs. depth and breadth of orientation guides
KPI 58: Unerring illustration of the established language use, including sufficient terminological accuracy
KPI 59: Consistent use of taxonomy according to classification, navigation, and creation of search indices
KPI 60: Involvement of all user groups in order to define targets
KPI 61: Glossary and taxonomy are closely restricted to central terms (in order to avoid a technological overload)
KPI 62: Convenient integration into information systems
KPI 63: Periodic maintenance processes for adapting to the dynamic development of language use
KPI 64: Integration of standardized terminologies (e.g., for specific industries)
KPI 65: Continuous and well-arranged configuration of layout and navigation
KPI 66: Speed of screen composition and performance of functions
KPI 67: Continuous examination and updating of layout and navigation
KPI 68: Convenient preparation of search results
KPI 69: Comfortable classification of information objects (as a basis for attribute-based indexing and search)
KPI 70: Appropriate pull-personalization for all users and push-personalization options for advanced users
KPI 71: Adequate usage of search engines for dynamically generated navigation structures, topic maps, taxonomy extracts, and taxonomy maintenance
KPI 72: Automatic link control for the correction of broken links
KPI 73: Navigation and search consider multiple languages
KPI 74: Integrated search for content, competences, and collaboration rooms
KPI 75: Centralization of different search indices in order to perform comprehensive search processes
KPI 76: Search engines search in connected sources
KPI 77: Accuracy and speed of search functions
KPI 78: Singular authentication for all integrated applications (single sign-on)
KM/KMS Success Factors

The current KM literature contains numerous studies and research works that address KM/KMS success factors. Jennex and Olfman constructed a critical success factor (CSF) framework by reviewing the existing literature. Several studies that focus on KM/KMS success were found, and a total of 78 KM initiatives or organizations were investigated. They identified success factors that were mentioned in the literature, combined them into composite CSFs, and ranked the composite CSFs according to the number of authors mentioning the factors.
The outcome was a set of 12 KM/KMS CSFs. CSFs SF1 to SF4 are considered the key CSFs, as they were mentioned in more than 50% of the investigated success factor studies. Figure 5 lists the set of CSFs in their rank order (SF1 to SF4 are highlighted by the red frame).
KM/KMS Success Models

Theoretical or process-orientated success models classify success in a broader context in order to also encompass causal connections, indirect impacts, and back coupling. Current KM literature mentions several success models, for example, the KM Value Chain of Bots/De Bruijn, the KM Success Model of Massey, Montoya-Weiss, and O'Driscoll, the Lindsey KM Effectiveness Model, the KMS Success Model of Maier, and Cooper's Evolutionary Model for KM Success. Additionally, Jennex and Olfman themselves present a KMS Success Model that is based on the DeLone/McLean IS Success Model.

Figure 4. Relationship between KPIs and the dimensions of integration

Figure 5. The 12 success factors of Jennex and Olfman

The 12 CSFs of Jennex and Olfman can be applied to the various success models to a greater or lesser extent. Referring to the top four success criteria, the KM Value Chain, Lindsey's KM Effectiveness Model, Maier's KMS Success Model, and Cooper's Evolutionary Model for KM Success do not reflect the observed data as well as the KM Success Model of Massey et al. and the KMS Success Model of Jennex/Olfman. The only difference between the model of Jennex/Olfman and the model of Massey et al. is SF5, "culture". Because SF5 would be the next most important success factor, the Jennex/Olfman KM success model is considered the best fit and will be used in the rest of this paper. The results of the comparison of KM/KMS success factors and KM/KMS success models are presented in Figure 6.
Figure 6. KM/KMS Success Models versus KM/KMS Success Factors

THE RELEVANCE OF INTEGRATION FOR KM/KMS SUCCESS

A Comparison of Success Factors and Key Performance Indicators

The Jennex/Olfman KM success model meets the requirements of both the KM approach based on tasks and processes and the one based on infrastructure and generic systems, which is the crucial and vital principle of an integrated KMS. The success model also has a theoretical basis, the DeLone/McLean IS Success Model. Finally, the Jennex/Olfman success model reflects the KMS CSFs best. It is suggested that one reason for the close fit between the CSFs and the Jennex/Olfman success model is that both KM approaches are addressed. In other words, it supports the integration of both perspectives. This again suggests that integration aspects and KM/KMS success are interlocked. The extent to which the 12 CSFs and the Jennex/Olfman model account for integration aspects will be examined by comparing the 78 KPIs of Riempp's measurement model to the 12 CSFs. Riempp has further classified his KPIs into one of the eight architecture interfaces (Figure 7). Each KPI of Riempp's architecture was verified as to whether it could be allocated to none, one, or more than one of the Jennex and Olfman CSFs. A total of 76 of the 78 KPIs could be assigned to one, and frequently to two or three, of the Jennex and Olfman CSFs. Figure 8 illustrates this mapping graphically.
Figure 7. Classification of KPIs into the architecture for integrated KMS

Figure 8. Graphical illustration of the key performance indicator assignment

Discussion of Conceptual Findings

Based on this comparison of CSFs to KPIs, the following main findings can be derived:

1. There are definite interdependencies between the Riempp KPIs and the Jennex and Olfman CSFs. A more detailed analysis of the assignment results indicates that certain measures have to be executed in order to achieve the 12 Jennex and Olfman CSFs. These actions are in turn reflected in the 76 Riempp KPIs.

On the one hand, this means the 12 Jennex and Olfman CSFs can be broken down into 76 smaller KPI elements that represent the measures that have to be introduced. On the other hand, the 76 KPIs of Riempp's architecture for integrated KMS can be accounted for by the 12 CSFs. In order to achieve, for instance, SF4 of Jennex/Olfman's model ("Motivation and commitment of users, including incentives and training"), several of Riempp's KPIs need to be embraced: incentive systems have to be implemented (KPI 4), a simple transfer of knowledge between competences and searching employees needs to be ensured (KPIs 2, 5, 9, 28, 37), experts and KM roles have to engage themselves (KPIs 42, 49, 52, 55), training and further education need to be conducted (KPIs 10, 20, 33, 51), and so forth. Going a step further, the comparison implies that the achievement of the 12 Jennex/Olfman CSFs results from the achievement of the 76 Riempp KPIs. Consequently, the 76 KPIs incorporating integration success need to be attained to achieve KMS success.

2. A large number of Riempp's KPIs can be clearly assigned to SF1, SF2, SF3, SF4, SF5, SF8, or SF10. The KPIs that can be allocated to SF1 and SF10 (technical success factors) refer to horizontal integration, while those that can be assigned to SF2, SF3, and SF8 (strategic success factors) refer to vertical integration. Finally, the KPIs allotted to SF4 and SF5 (cultural and personal success factors) apply comparably to horizontal integration, to the integration of KM processes and roles in the KMS, as well as to cultural integration. The other Riempp KPIs, those which could be assigned to SF6, SF7, SF9, SF11, and SF12, could not be grouped as precisely. However, all of the remaining KPIs also refer to the different dimensions of integration.

Based on these results, it can be stated that the 12 Jennex/Olfman CSFs correspond more or less equally to the different integration dimensions of Riempp's architecture for integrated KMS.

Figure 9. Quantitative illustration of the key findings of the assignment

Figure 9 provides a more detailed view of the assignment of the relevant success factors to the different dimensions of integration. The first three columns of the table refer to the 12 Jennex and Olfman CSFs: column 1 shows the ID of each success factor, column 3 describes each factor roughly, and column 2 illustrates how the Jennex/Olfman CSFs were grouped based on the outcome of the comparison (compare main finding 2). The following columns of the table refer to Riempp's architecture for integrated KMS: column 4 shows which Riempp KPIs were assigned to which Jennex/Olfman CSFs, and column 5 outlines the number of assigned factors. Finally, columns 6 to 9 indicate to which specific dimension of integration the single Jennex/Olfman CSFs and, hence, the assigned Riempp KPIs refer; columns 6, 7, 8, and 9 basically legitimate the grouping of CSFs in column 2. The results of the comparison show that the achievement of the 12 Jennex/Olfman CSFs results from the achievement of the 76 Riempp KPIs. The logical conclusion is that organizations need to cope with Riempp's KPIs in order to attain KM/KMS success. The fact that the KPIs of Riempp's architecture for integrated KMS largely indicate integration success leads to the following basic assumption: integrated KM determines KM/KMS success.
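The assignment of KPIs to CSFs described above is, in essence, a many-to-many mapping. The sketch below is an illustration, not the study's actual instrument: it populates only SF4 with the KPI numbers explicitly listed in the text, inverts the mapping, and tallies unassigned KPIs in the way Figure 8 reports them; the remaining CSFs would be filled in analogously from Figure 9.

```python
# Sketch of the CSF-to-KPI assignment as a many-to-many mapping.
# Only SF4 is populated from the example given in the text; all other
# CSFs would be filled in from Riempp's full assignment (Figure 9).

from collections import defaultdict

csf_to_kpis = {
    "SF4": {4, 2, 5, 9, 28, 37, 42, 49, 52, 55, 10, 20, 33, 51},
    # "SF1": {...}, "SF2": {...}, ...
}

def kpi_to_csfs(mapping):
    """Invert the mapping: which CSFs does each KPI contribute to?"""
    inverted = defaultdict(set)
    for csf, kpis in mapping.items():
        for kpi in kpis:
            inverted[kpi].add(csf)
    return inverted

def unassigned_kpis(mapping, total=78):
    """KPIs (1..total) not assigned to any CSF; in the study, only 2 of
    the 78 KPIs remained unassigned once all 12 CSFs were considered."""
    assigned = set().union(*mapping.values())
    return sorted(set(range(1, total + 1)) - assigned)

print(len(csf_to_kpis["SF4"]))            # 14 KPIs support SF4 in this example
print(sorted(kpi_to_csfs(csf_to_kpis)[4]))  # ['SF4']
print(len(unassigned_kpis(csf_to_kpis)))  # 64 here, because only SF4 is filled in
```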
Observations from an International Consulting Company Case

This case study was conducted in the context of a merger between two international consulting firms. A KM initiative was launched within the scope of the post-merger integration activities at the acquired company. The primary focus of the initiative was the introduction and announcement of new KMS functionalities and tools, as well as the integration and adjustment of existing KM structures. The measurement and evaluation of the success of the initiative are ideally suited to verify the assumption that integrated KM determines KM/KMS success. The verification process consists of two consecutive steps: Firstly, the achievement of Riempp's KPIs was verified by means of a structured survey and a structured interview. The survey consisted of 41
questions, mainly focused on cultural, personal, and strategic success factors. The questionnaire was distributed to 50 employees directly involved in the KM initiative, and 17 analyzable questionnaires were returned (a return rate of 34%). Both the survey and the questionnaire can be viewed in the appendix (Table 1 and Table 2). The structured interview was conducted with the chief technology architect of the acquired company. It consisted of 25 questions and focused on technical success factors. The survey as well as the interview dealt with those of Riempp's KPIs that can be assigned to CSFs SF1, SF2, SF3, SF4, SF5, SF8, and SF10, that is, the ones that can be described as being either a technical, strategic, or cultural and personal success factor. The main results of the first step are summarized in Figure 10.

Figure 10. Attainment degree of the success factors of Jennex/Olfman's model

The results lead to three assumptions:

1. The technical success factors were achieved due to horizontal integration on the system layer.
2. The cultural and personal success factors were achieved due to horizontal and cultural integration, as well as to the integration of the KM processes and roles in the KMS.
3. The strategic success factors were not achieved due to a lack of vertical integration across the three layers.

These three assumptions were validated by five semi-structured interviews with the initiators and key managers of the KM initiative (the interview guide can be examined through Table 3 in the appendix). This constitutes the second step of the case study. The results and insights of the first step were investigated and discussed in more detail. All questions focused more or less equally on the following factors:
(1) Technical aspects (especially regarding the integration of the KMS into the corporate portal).
(2) Personal and cultural aspects (especially regarding the horizontal integration of KM processes, the integration of the KM processes with the KMS, and the operational and organizational structure).
(3) Strategic aspects (especially regarding the transparency and communication of the KM strategy and KM goals, as well as the knowledge structure).

The elementary and most meaningful results of the verification of the three assumptions are discussed below:

1. Horizontal integration on the system layer was achieved by integrating the IT infrastructure along the four horizontal pillars (content, collaboration, competence, and orientation) of Riempp's architecture. Data storages are integrated with each other per pillar, thus allowing the standardization of diverse applications. On the application level, integration basically appears in the complexity of internal and external applications. An integrated regulation framework can be ensured by a standardized and continuously used taxonomy, which also forms the basis for comprehensive indexing across the pillars in order to provide an overall search function. A complex portal solution is available, and KM functions and applications, as well as the corresponding content, are deeply integrated. The integration of portal applications mainly refers to an integrated search for content, competences, and collaboration rooms, as well as the aligned use of search engines. The presentation level of the KMS is realized by the use of the graphical user interface of the company's portal, whose uniform configuration ensures an integrated working environment for its users.

2. The cultural and personal success factors refer to the motivation and commitment of users, as well as to a company's predominant organizational culture. These factors were achieved due to horizontal integration on the process level, cultural integration, as well as the integration of KM processes and roles in the KMS. In respect of motivation and commitment, it is noteworthy that all employees participate in KM activities voluntarily, yet show a high intrinsic motivation. The company's knowledge competencies and KM experts are also highly motivated and engaged. They regularly present themselves as "role models" in diverse communities. In this context, the considerable freedom of scope for creation, collaboration, documentation, and reflection needs to be mentioned. All of these points can be ascribed to the successful integration of the processes and roles with the KMS and the company's culture. Various training and further education measures (physical training sessions, as well as audio and web cast sessions) were introduced at the beginning of the post-merger integration phase to support these developments. Another positive aspect was the active usage of content management functions and competence directories by authors, competencies, and searching employees, as well as the satisfactory assessment of the feedback opportunities between these employees. This was ensured through a satisfactory horizontal integration on the process level.

3. The strategic success factors focus on the overall knowledge strategy, the knowledge structure in an organization, and the aligned articulation of KM goals. The achievement of these success factors failed due to the lack of vertical integration across the three layers (strategy, process, and system) of the architecture. The KM topic was not tightly integrated into the overall change management process of the post-merger integration activities. No comprehensive information policy, concrete authorization system, or employee incentive system has been introduced. There was also no clear and definite objective for the KM areas of content, competence, and community management. The KM goals are not consistent with the overall organizational goals, and the understanding of the KM and KMS' meaning, aims, and objectives needs to be communicated more clearly throughout the company.

Summarizing the investigation, it can be stated that the attainment or failure of the success factors depends on the degree of integration. The basic assumption has therefore been strengthened. The necessity for an integrated and holistic view is outlined by the case study: horizontal integration, cultural integration, and the integration of KM processes and KM roles into the KMS support the usage and frequency of use of the KMS, but do not ensure that the KMS is used in the most effective and efficient way. In order to control the usage of the KMS to strengthen the organization's performance and the achievement of strategic goals, a company's overall strategy and goals need to be aligned with the KM strategy and KM goals. In terms of vertical integration, the KM strategy and goals need to be transparent and clearly communicated so that all employees "act in concert and walk in the same direction." Hence, the described KM initiative can be evaluated as unsuccessful. The technical and cultural conditions required for success were established, but the strategic aspects were largely disregarded.
CONCLUSION, LIMITATIONS, AND FURTHER RESEARCH

The overall goal of this paper, to analyze and investigate the coherences, connections, and interdependencies between KM success and an integrated and holistic perspective on KM, has been achieved. The CSFs of Jennex/Olfman's model were identified as widely accepted factors as they are, firstly, based on the findings of accredited and valued KM publications and studies referring to a total of 78 KM initiatives. Secondly, they can be applied to all of the fundamental KM success models. The 78 KPIs of Riempp's model focus on different dimensions of integration and evaluate successful KM in terms of integration success. In summary, the literature review and comparison of CSFs and KPIs show that it is feasible to focus on achieving Riempp's KPIs, hence concentrating on integration. In the end, this approach will lead to an achievement of the 12 Jennex/Olfman CSFs and ensure KM initiatives' success. The results support the view that, in order to achieve KM success, understood as a multidimensional concept as defined by Jennex, Smolnik, and Croasdell (section 3.1), all elements of the integrated KMS architecture need to be addressed in a structured and integrated approach. The case study supports these findings. The KMS are indeed used intensively by the employees. However, due to a lack of transparency regarding the KM strategy and goals and a lack of vertical integration, the KMS are not used in the most efficient way in terms of an improvement of the company's performance. A broad consideration of all the integration dimensions is necessary to execute KM initiatives successfully. It would be tempting to conclude, and not "only" to assume, that integrated KM determines KM success. In order to do so, more real-life case studies need to be conducted. This can be regarded as a limitation of the findings in this article, as well as an area for further research work.
Additionally, effort should be made to develop an "integrated KM success model." The framework can either focus fully on Riempp's architecture for integrated KMS or on selected aspects of it. With respect to the framework's configuration, an entirely new model could be developed, or an existing model, for example the Jennex/Olfman success model, could be extended appropriately.
REFERENCES

Alavi, M., & Leidner, D. E. (1999). Knowledge Management Systems: Issues, Challenges, and Benefits. Communications of the AIS, 1. Alavi, M., & Leidner, D. E. (2001). Review: Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues. Management Information Systems Quarterly, 25(1), 107–136. doi:10.2307/3250961 Anantatmula, V., & Kanungo, S. (2006). Structuring the Underlying Relations among the Knowledge Management Outcomes. Journal of Knowledge Management, 10(4), 25–42. doi:10.1108/13673270610679345 Argote, L., McEvily, B., & Reagans, R. (2003). Managing Knowledge in Organizations: An Integrative Framework and Review of Emerging Themes. Management Science, 49(4), 571–582. doi:10.1287/mnsc.49.4.571.14424 Bals, C., Smolnik, S., & Riempp, G. (2007). A Case for Integrated Knowledge Management. In Proceedings of the 4th Conference Professional Knowledge Management: Experiences and Visions. Berlin, Germany: GITO. Bots, P., & De Bruijn, H. (2002). Effective Knowledge Management in Professional Organizations. In Proceedings of the 35th Hawaii International Conference on System Sciences. IEEE Computer Society Press.
Chong, S., & Choi, Y. S. (2005). Critical Factors in the Successful Implementation of Knowledge Management. Journal of Knowledge Management Practice – In the Knowledge Garden, 6. Cooper, L. P. (2006). An Evolutionary Model for KM Success. In Proceedings of the 39th Hawaii International Conference on System Sciences. IEEE Computer Society Press. Damodaran, L., & Olphert, W. (2000). Barriers and Facilitators to the Use of Knowledge Management Systems. Behaviour & Information Technology, 19(6), 405–413. doi:10.1080/014492900750052660 Daniel, D. R. (1961). Management Information Crisis. Harvard Business Review, 39(5), 111–112. Davenport, T., De Long, D., & Beers, M. (1998). Successful Knowledge Management Projects. MIT Sloan Management Review, 39(2), 43–57. DeLone, W. H., & McLean, E. R. (1992). Information Systems Success: The Quest for the Dependent Variable. Information Systems Research, 3, 60–95. doi:10.1287/isre.3.1.60 Fahey, L., & Prusak, L. (1998). The Eleven Deadliest Sins of Knowledge Management. California Management Review, 40(3), 265–275. Gruber, M. (2000). Der Wandel von Erfolgsfaktoren Mittelständischer Unternehmen. Wiesbaden: DUV. Jennex, M. E. (2005). Knowledge Management Systems. International Journal of Knowledge Management, 1(2), 1–4. Jennex, M. E. (2007, November 9). Knowledge Management in Support of Education. First International Conference on Education Reform, Khon Kaen, Thailand.
Jennex, M. E., Croasdell, D., & Smolnik, S. (2008). Towards Measuring Knowledge Management Success. In Proceedings of the 41st Hawaii International Conference on System Sciences. IEEE Computer Society. Jennex, M. E., & Olfman, L. (2005). Assessing Knowledge Management Success. International Journal of Knowledge Management, 1(2), 33–49. Jennex, M. E., & Olfman, L. (2006). A Model of Knowledge Management Success. International Journal of Knowledge Management, 2(3), 51–68. Kankanhalli, A., Tan, B., & Kwok-Kee, W. (2005). Contributing Knowledge to Electronic Knowledge Repositories – An Empirical Investigation. Management Information Systems Quarterly, 29(1), 113–143. Lindsey, K. (2002). Measuring Knowledge Management Effectiveness: A Task Contingent Organizational Capabilities Perspective. Eight Americas Conference on Information Systems, (pp. 2085-2090). Maier, R. (2002). Knowledge Management Systems – Information and Communication Technologies for Knowledge Management. Berlin, Germany: Springer. Massey, A. P., Montoya-Weiss, M. M., & O’Driscoll, T. T. (2002). Knowledge Management in Pursuit of Performance: Insights from Nortel Networks. Management Information Systems Quarterly, 26(3), 269–289. doi:10.2307/4132333 McDermott, R. (1999). Why Information Technology Inspired but Cannot Deliver Knowledge Management. California Management Review, 41(4), 103–117. McKeen, J. D., Zack, M. H., & Singh, S. (2006). Knowledge Management and Organizational Performance: An Exploratory Survey. In Proceedings of the 39th Hawaii International Conference on System Sciences. IEEE Computer Society Press.
Nöcker, R. (1999). Erfolg von Unternehmen aus Betriebswirtschaftlicher Sicht. Unternehmerisch erfolgreiches Handeln (pp. 53-66). Riempp, G. (2004). Integrierte Wissensmanagement-Systeme – Architektur und Praktische Anwendung. Berlin, Germany: Springer. Rockart, J. F. (1979). Chief Executives Define Their Own Data Needs. Harvard Business Review, 57(2), 81–93. Schmalen, C., Kunter, M., & Weindlmaier, H. (2005). Theoretische Grundlagen, Methodische Vorgehensweise und Anwendungserfahrung in Projekten für die Ernährungsindustrie. In Proceedings der 45. Tagung für Gesellschafts- und Sozialwissenschaften des Landbaues Göttingen. Erfolgsfaktorenforschung. Smolnik, S. (2006). Wissensmanagement mit Topic Maps in Kollaborativen Umgebungen – Identifikation, Explikation und Visualisierung von Semantischen Netzwerken in Organisationalen Gedächtnissen. Berlin, Germany: Shaker.
KEY TERMS AND DEFINITIONS

Holistic KM View: A holistic KM view integrates people, processes, and technology. The KMS is created by combining content, organizational processes, users, and technical solutions to facilitate the capture, storage, retrieval, transfer, and reuse of knowledge to improve organizational and individual decision-making.
Integrated KMS Perspective: An integrated KMS is designed to fit an infrastructure and generic KM perspective (KM in the large) as well as a process- or task-based KM approach (KM in the small).
An Architecture for Integrated KMS: An architecture for integrated KMS consists vertically of three layers (strategy, process, and system) and horizontally of four pillars (content, competence, collaboration, and orientation). All these elements are influenced by the organizational culture.
An Integrated Measurement System: The integrated measurement system verifies the target achievement of (1) creating knowledge transparency, (2) promoting knowledge exchange, (3) controlling knowledge development, and (4) ensuring knowledge efficiency, taking quality improvements as well as time and cost reductions into consideration.
Knowledge Management (System) Success: KM and KMS success are multidimensional concepts. Each includes capturing the right knowledge, getting the right knowledge to the right user, and using this knowledge to improve organizational and/or individual performance. It is measured using the dimensions of impact on business processes, impact on strategy, leadership, and knowledge content.
Critical Success Factors: Critical success factors are the limited number of areas in which results, if they are satisfactory, will ensure successful competitive performance for the organization. They are the few key areas where things must go right for the business to flourish.
APPENDIX

Table 1. Employee survey. Questions are grouped by KM area; within each area, questions address both the process layer and the system layer. Answers are either Yes/No or number-based ratings.

Content Management
1) Have introductory courses, measures for further education, or other trainings addressing content management process flows been performed within the last 6 months? (Yes/No)
2) Briefly consider how long it usually takes to gather a piece of information within the portal. Can you identify a performance increase compared to the initial situation prior to the merger? (Yes/No)
3) Is your content management performance assessed on the basis of target agreements or rewarded by means of honors? (Yes/No)
4) Assess the feedback opportunities of users on active authors within the portal. (Number-based)
5) Have introductory courses, measures for further education, or other trainings addressing the handling of content management functions been performed within the last 6 months? (Yes/No)
6) Assess the possibilities for comprehensive content creation in the content management area. Examples of comprehensive content creation are rich-text formatting, grouping and linking of information objects, etc. (Number-based)
7) Have you ever been motivated by other employees' input in such a way that you became an author yourself? (Yes/No)
8) Assess the comfort and usability of the portal user interface. (Number-based)

Competence Management
9) Have introductory courses, measures for further education, or other trainings addressing competence management process flows been performed within the last 6 months? (Yes/No)
10) Assess the success of integrating the topic area knowledge management into the overall change management process activated by the merger. (Number-based)
11) Have you been made aware of how far the functionalities of a competence management system (e.g., expert functions) can add value to your work? Assess the quality and degree of communication with regard to that topic according to your satisfaction. (Number-based)
12) Does a definite goal with regard to competence management exist, and is this goal clearly communicated by means of appropriate media (e.g., newsletter)? (Yes/No)
13) Does a safeguard against the fear of becoming a "transparent employee" exist by means of a definite authorization system and comprehensive information? (Number-based)
14) Have introductory courses, measures for further education, or other trainings addressing the handling of competence management functions been performed within the last 6 months? (Yes/No)
15) Assess the possibility of establishing contact with important competences according to your satisfaction. (Number-based)
16) Assess the usage of the competence directory within the portal. Self-critically assess the quality and quantity of your own entries as well as the frequency of use. (Number-based)
17) Assess the comfort of navigation, search, and analysis options as well as the effectiveness of visualization of competence management functions. (Number-based)

Collaboration Management
18) Assess the application of information and communication systems in support of existing collaboration and community functions (e.g., Communities of Practice, virtual team rooms, etc.). (Number-based)
19) Do definite and clearly communicated goals for communities and virtual team rooms exist? (Yes/No)
20) Assess the proportion of community members and members of virtual team rooms who are experts according to your satisfaction. (Number-based)
21) Assess the IT solution applied for the realization and support of communities and virtual team rooms. (Number-based)
22) Assess the engagement and collaboration frequency of role models (e.g., moderator, team officer, project officer, etc.) in communities and virtual team rooms. (Number-based)
23) Assess the intrinsic motivation of group members with regard to the advancement of a community or virtual team room. (Number-based)
24) Have introductory courses, measures for further education, or other trainings addressing the handling of collaboration management functions been performed within the last 6 months? (Yes/No)
25) Is your effort in communities or virtual team rooms encouraged by the motivating behaviour of supervisors? (Yes/No)
26) Assess the integration of community functions and virtual team room environments into the daily work environment according to your satisfaction. (Number-based)
27) Are you always online while your PC is powered on? (Yes/No)
28) Assess the convenience and comfort of the operability of collaboration functions. (Number-based)

Management of Orientation
29) Assess the integration of orientation functions (search and retrieval, navigation, etc.) into existing information systems. (Number-based)
30) Assess the adequacy of the compromise between the convenience and articulateness of navigation and search versus the depth and breadth of orientation functions. (Number-based)
31) Assess the representation of the established language use in combination with adequate terminological precision according to your satisfaction. (Number-based)
32) Assess the clarity of the application of taxonomies in classification, navigation, and the building of search indices. (Number-based)
33) Assess the appropriateness of push- and pull-personalization in knowledge management systems. (Number-based)
34) Assess the composition of layout and navigation with regard to clarity. (Number-based)
35) Assess the quality of the preparation of search results according to your satisfaction. (Number-based)
36) Assess the speed and accuracy of search functions according to your satisfaction. (Number-based)
Table 2. Guidelines of the structured interview. Each item was discussed as a critical success factor and assessed as given or not given (Yes/No); the items address the content, competence, collaboration, and orientation areas at both the process layer and the system layer.

1) Convenient integration of information objects in task execution.
2) Integrability with other applications by standardized interfaces.
3) Comfortable creation and revision of information objects in the daily work environment using familiar tools (e.g., WYSIWYG).
4) Disassociation of content, structure, presentation, and application logic.
5) Information objects are ideally stored in an integrated, database-based information memory, which is applicable across all platforms.
6) Rendering for all possible file formats for various clients.
7) Creation and preservation of maintenance processes.
8) Back-end integration with existing human resources management systems in order to avoid inconsistencies.
9) Front-end integration with systems of the daily work environment.
10) High network capacity.
11) Automatic adaptation of available functions to the connected hardware and different network capacities.
12) An integrated user interface.
13) Glossary and taxonomy are closely restricted to central terms.
14) Periodically recurring maintenance processes for adapting to the dynamic development of language use.
15) Integration of standardized terminologies.
16) Continuous examination and updating of layout and navigation.
17) Adequate usage of search engines for dynamically generated navigation structures, topic maps, taxonomy extracts, and taxonomy maintenance.
18) Automatic link control for the correction of broken links.
19) Navigation and search consider multiple languages.
20) Integrated search for content, competences, and collaboration rooms.
21) Centralization of different search indices in order to perform comprehensive search processes.
22) Search engines search in connected sources.
23) Comfortable classification of information objects (as a basis for attribute-based indexing and searching).
24) Single authentication across all integrated applications.
Table 3. Guidelines of the semi-structured interview

Topic 1) Overall KM/KMS success:
a) Do you consider the KM initiative as successful? Please take the following four factors as a standard for your appraisal: increase of project resources (including human and financial resources); increased amount of knowledge and knowledge use; the survivability of the project does not depend on a few core competences; a coherence with the financial success of the company.

Topic 2) Horizontal integration:
The employees who participated in the employee survey state that they have been trained with regard to process flows and the handling of knowledge management system functions.
a) To what extent have the introduced measures been supported by means of horizontal integration?
The employees who participated in the employee survey assess the following key performance indicators as being achieved: active participation of authors so that searching employees can find content easily and are motivated to become authors themselves; active usage of the competence registry; feedback opportunities between users and authors; free space for formation, collaboration, documentation, and reflection.
b) Give examples of how the introduced measures have been supported by means of horizontal integration. Also refer to cultural aspects.

Topic 3) Integration of KM processes and roles in the KMS:
The employees who participated in the employee survey assess the following key performance indicators as being achieved: active usage of role models and competences; active engagement of popular experts acting as role models and precursors.
a) How have experts and competences been motivated in order to achieve this state? Give examples.
b) Has this state been achieved rather due to an orientation towards the company's structure and processes or due to the existence of a KM-beneficial culture?

Topic 4) Vertical integration:
The employees who participated in the employee survey assess the following key performance indicators as not being achieved: clearly communicated and transparent content and competence management goals; clearly communicated community goals which are consistent with organizational goals; comprehensive information and a definite authorization system; integration of the topic area KM into the overall change management process; creation and preservation of incentives (e.g., awards).
a) Does a definite KM strategy and KM goal exist?
b) Are KM processes geared towards the KM strategy and KM goals?
c) Assess the achievement degree of vertical integration. Do you consider vertical integration activities as being completed?
d) Do you believe that the above key performance indicators have not been achieved due to a lapse of a critical integration degree?

Topic 5) Decisiveness of integration:
a) Do you consider an overall and holistic view on KM (i.e., a consideration of all central dimensions of integration) as being material to KM/KMS success?

Topic 6) Measurement system:
a) How do you measure the usage of knowledge management systems as well as the benefit of knowledge use?
b) How do you ensure that the correct knowledge (i.e., current, helpful, relevant, and reliable knowledge) is collected?
Chapter 14
Strategies for Successful Implementation of KM in a University Setting
Vittal S. Anantatmula, Western Carolina University, USA
Shivraj Kanungo, George Washington University, USA
ABSTRACT

Research has identified enabling factors and inhibitors for implementing knowledge management successfully and accomplishing its strategic objectives. However, it is important to understand how these factors interact with each other to improve or inhibit performance. With this in mind, this chapter presents a model, based on a research study, to determine the underlying relations among these factors and to develop strategies for implementing KM initiatives.
INTRODUCTION

Knowledge accumulated over centuries is often manifested in the form of ethics, culture, as well as the technological, social, and economic developments of a society. At the organizational level, an organization's growth in terms of wealth, collaborative working culture, business processes, and productivity is a true reflection of its accumulated knowledge. In short, knowledge is linked to progress in practically every aspect of our lives. In the current economy, advances in information technology and communication systems have encouraged, and in some instances compelled, organizations
to develop and institutionalize processes for the creation, transfer, and management of knowledge. It is, therefore, not surprising that research recognizes knowledge as a key economic resource. In particular, knowledge creation and the subsequent sharing of this new knowledge and innovation are critical for organizations to gain and retain competitive advantage. Clearly, knowledge is considered a critical resource for sustaining competitive advantage.
What is Knowledge?

Before we define knowledge, it is important to understand the most commonly used terms - data,
information, and knowledge - which are often used while referring to knowledge. The term "data" is used to refer to facts. Further, data consist of unprocessed facts (Edwards & Kidd, 2003). Unprocessed facts are organized to generate information; when used by someone to solve a problem, information in turn becomes personal knowledge (Ellis, 2003). Data can be transformed into information by organizing and/or processing them to derive meaningful and logical conclusions. Therefore, deriving information from data is associated with a purpose (Edwards & Kidd, 2003). Knowledge is derived from thinking, and it is a combination of information, experience, and insight. Insight, in particular, is developed with the use of tacit knowledge. Deriving knowledge from information also requires human judgment and is based on context and personal experience. It is, therefore, logical for information to be considered a subset of knowledge, as it denotes understanding of the data. However, this understanding is personal, since one can view unique but different patterns in the information, which become personal knowledge. When we document such personal knowledge in some form or other, it becomes explicit knowledge, an intellectual asset that can be shared among people within an organization. Additionally, as we move from data to information and then to knowledge, the context and meaning tend to increase. Obviously, data, information, and knowledge form a hierarchy or progression of transformation (Edwards & Kidd, 2003). We can make a distinction between information and knowledge on the basis of how they are created. Information is created by deduction whereas knowledge is created by induction (Fernandes, 2000). Further, uncertainty grows as we progress from data to information to knowledge (Berztiss, 2001). Berztiss observed that there is no
uncertainty in data, some uncertainty in information, and even more uncertainty in knowledge. As a resource, knowledge increases in value with use. Ironically, knowledge tends to remain dormant, and not very useful, until it is reflected in action (Rad & Anantatmula, 2005). Therefore, managing knowledge in organizations is a challenge, not only because it is hard to identify but also because it is even more difficult to value and deploy relevant knowledge to gain a competitive advantage in the market place (Dutta, 1997).
Knowledge Management

Though not a new concept, knowledge management (KM) has gained prominence due to advances in information technology and its extensive use in organizations. Consequently, KM is often perceived as information management by many organizations; it is often associated with technological solutions such as intranets and databases (Marr, 2003). Early research on KM, however, suggested that the importance of technological factors is far lower than that of people and organizational factors (Davenport & Prusak, 1998). People are instrumental in creating knowledge, as it is derived from thinking. Furthermore, a majority of personal or organizational knowledge remains tacit. It is imperative to understand that KM is a broader concept than simply the use of technology and tools. The primary focus of KM is to utilize information technology and tools, business processes, best practices, and the organizational culture to develop and share knowledge within an organization so as to connect those who possess knowledge to those who need the knowledge (Anantatmula, 2005). Ultimately, the purpose of KM is to leverage knowledge for productive purposes. It is in this process that IT plays a supporting role for effective KM implementation.
Research has shown that the causes and effects involved in evaluating IT effectiveness are separated in time (Soh and Markus, 1995). In KM, which has a far broader scope than IT, the gap between investment in KM and its effectiveness is likely to be exaggerated because of the difficulty associated with measuring its effectiveness (Anantatmula, 2005). Furthermore, just as the relationship between cause and effect in IT is complex and indirect, the relationship between the enablers of KM and their outcomes is even more intractable and complex. As a direct consequence of the complexity associated with KM (McElroy, 2000), many organizations that have employed KM initiatives remain unclear about the extent to which they have been successful in experiencing the anticipated outcomes, and why. Several organizations implementing KM rely primarily on IT tools (Greenhalgh, Robert, MacFarlane, Bate, & Kyriakidou, 2004) and, as a result, may not achieve the desired results.
The purpose of this chapter is to identify enablers of KM through a literature review and to use these enablers to develop a model for the successful implementation of KM initiatives. This model allows us to address the complexity associated with KM initiatives by identifying barriers and enablers to KM that are unique to any organization. Additionally, this chapter demonstrates how widely varying mental models of the causal chain of influencers of KM effectiveness can be integrated to provide a clearer understanding in a complex organizational context. In the following section, we review past studies to identify enablers of KM implementation. Next, using these factors, we develop a research methodology using Interpretive Structural Modeling (ISM) to determine the underlying relationships among the factors. For this purpose, we used surveys and interviews of KM professionals. We obtain results that, when understood from the final integrated perspective, resolve many counterintuitive findings from the analysis. Based on these results, we suggest strategies and recommendations for the successful implementation of KM.
LITERATURE REVIEW

Past studies have shown that it is difficult to assess the return on investment of knowledge management systems (KMS). Contending that the impact of a KMS on the organization depends on the evolutionary stage of the KMS, Cooper (2006) suggests that, at the system level, completion is considered success. Likewise, the effectiveness/efficiency of tasks, cost savings through process improvements, and competitive advantage are considered indicators of success at the task, process, and organizational levels, respectively (Cooper, 2006). However, these results can be obtained only through successful KM implementation. The complexity associated with KM is due not only to the multiplicity of the enablers of KM but also to the intertwined nature of how these enablers interact with each other. The following discussion helps highlight the proliferation of factors that have been identified as important determinants of KM success.
KM Success Factors

Technology and IT are used interchangeably in this section, according to how they are referred to in the different research studies. Culture and technology are commonly found in several studies as enablers of KM, since organizational culture plays a pivotal role in knowledge creation, sharing, collaboration, and leverage, whereas technology facilitates easy and effective knowledge transfer. Elliott and O'Dell (1999) considered culture, technology, infrastructure, and measurement as four key enablers of KM and maintained that each is essential and that they work together to yield sustainable KM success. Culture promotes collaboration and sharing of knowledge; technology speeds up knowledge transfer but creates information overload; infrastructure includes organization structure, technology, processes, and people networks to ensure knowledge flow; and measurement should
focus on the impact of knowledge on organization performance (Elliott & O'Dell, 1999). Leadership plays an important role in ensuring the effectiveness and combined effort of the four enablers identified by Elliott and O'Dell, and research has shown that leadership, along with technology, culture, and measurement, is considered an enabler of KM (Ward & Aurum, 2004). Measurement, as a key factor for measuring and promoting success, has appeared in other research studies as well. Measures associated with a KMS can be used as one of the means to understand how it should be developed and implemented (Jennex & Olfman, 2004). KM success factors can be viewed as facilitating factors for a KM initiative; some success factors include leadership, investing in people, and developing supporting organizational conditions such as technical infrastructure and a secured knowledge structure (Chourides, Longbottom & Murphy, 2003; Jennex & Olfman, 2004).
Culture and technology are found in several other research studies as well. Contending that KM success is driven by KM infrastructure and process capabilities, a research study by Gold, Malhotra and Segars (2001) proposed that technology, structure, and culture drive the infrastructure capability. This research study, involving around 300 senior executives, identified that an information-sharing culture is critical for effective KM. A research effort aimed at exploring the relation between KM drivers and organizational KM performance (Yu, Kim & Kim, 2004), based on 66 Korean firms, found that KM drivers such as learning orientation, knowledge sharing intention, knowledge management system quality, reward, and knowledge management team activity were significantly related to organizational knowledge management performance, that is, knowledge quality and user knowledge satisfaction. Yu et al.'s study identified three main enabler dimensions and nine enablers, shown in Table 1.

Table 1. KM dimensions and enablers (Yu et al., 2004)
Organizational characteristics: learning orientation; communication; knowledge sharing; flexibility
IT: KMS quality; KMS functionality
Managerial support: top management support; KM reward; KM team activity

As IT enables the acquisition of greater amounts of information, thereby providing a greater amount of data related to organizational processes (Alavi & Leidner, 2001), it provides opportunities for creating and expanding knowledge. However, most of the IT tools of KM are developed for explicit knowledge. Koh, Ryan and Prybutok (2005) identified three critical enablers:
• Strategic alignment and focus
• System and data integration
• Security and privacy policies
Another research study (Hariharan, 2005), acknowledging that KM would help share knowledge and eliminate reinvention, proposed seven enablers of KM, including:
• Strategic focus
• Alignment with objectives
• KM organization and roles
• Standard KM processes
• Culture and people engagement
• Content under scrutiny
Between tacit knowledge and explicit knowledge, the former represents a lion’s share of total knowledge. Based on the contention that much of the tacit knowledge - a greater component of organizational knowledge - is found in social interactions, and different social contexts facilitate different modes of knowledge integration, Lang (2004) suggested that social capital and social
context are enablers of knowledge integration, which is influenced by the characteristics of the knowledge involved and the characteristics of the social context in which they occur. We must keep in mind that social context is influenced by organizational culture. Lee and Choi (2003) identified seven enablers, namely collaboration, trust, learning, centralization, formalization, T-shaped skills, and IT support. Of these, trust is part of an organization's culture and is translated into activities such as increased collaboration and communication. Trust is considered a significant factor: in the absence of trust, knowledge sharing will not take place, and organizations refrain from sharing critical information across the enterprise (Robbins, 2005). Thus, trust fits into the roles of both inhibitor and enabler. In the current global economy, outsourcing is a common practice to acquire quality services and expertise at a lower cost. Consequently, virtual project teams are integral to many projects in the current economy. Knowledge transfer in virtual teams for systems development takes place in a more dynamic communication environment than in conventional teams. Arguing that virtual teams may need highly skilled individuals, Sarker, Sarker, Nicholson and Joshi (2005) found in a research study that knowledge transfer in virtual teams is influenced by individuals' extensive participation in conversations (communication), being perceived as credible through trustworthy behavior (credibility), and having collectivist values (culture).
KM Enablers: Summary of Literature Review

Based on the literature reviewed thus far, we summarize the following KM enablers, which are listed as KM factors in Table 2 along with their sources of reference. The literature review has helped us to develop a list of the main factors that past research has
identified as influencing KM success. However, we have intentionally not attempted to formally classify or organize those factors. The rationale for this approach is that, while past research has helped to identify KM factors, understanding how these factors interact and influence each other remains the critical issue in developing strategies for successful KM implementation. Past research has not addressed this concern, and thus we are motivated to address this issue in the next section using an appropriate research methodology.
RESEARCH METHODOLOGY

To accomplish our research goal of understanding the interactions and influences among the enablers of KM in order to develop successful KM strategies, Interpretive Structural Modeling (ISM), developed by Warfield (1973), is employed. In general, ISM involves structuring goals and objectives into a hierarchical framework. However, we adopted this method to develop an understanding of the shared underlying mental model in which these factors (Table 2) operate. ISM is considered the appropriate research method because human brains have limits in coping with complex problems that involve a significant number of elements and relations among elements (Waller, 1975); also, ISM uses an interactive discussion method to collect data, which forces the participants in the research study to carefully analyze the links between these factors. ISM is a process that helps groups of people structure their collective knowledge and model interrelationships in a way that enhances their ability to understand complexity. In other words, it helps to identify structure within a system of related elements and provides the opportunity to analyze it from different perspectives. Figure 1 was presented to the respondents, and they were asked to fill out the white cells of the matrix shown in the figure with the following instructions:
• Enter 1 when the row influences the column
• Enter 2 when the column influences the row
• Enter 3 when there is no relation
• Enter 4 when row and column influence each other

For example, the cell (1, 2) represents the question, "Does strategic focus lead to KM leadership or vice-versa?", and the response (1, 2, 3, or 4) is entered in cell (1, 2). The contextual relation between any two elements is established based on a pair-wise assessment of all thirteen factors shown in Figure 1, with the majority (75%) of the respondents agreeing to a specific relation. With the use of this methodology, one can (a) identify the direct and indirect relationships between attributes of performance and (b) show how to include softer variables in the analysis.
Using the survey instrument shown in Figure 1, we interviewed a selected group of faculty and staff from two academic institutions to collect the data. Participants in the study were actively involved in university-initiated KM efforts and KM research. Participants were academicians and administrators of the information systems divisions at these universities. The detailed ISM methodology used to develop the directional graph is explained in Appendix A.

Table 2. Summary of literature review
KM enablers identified: strategic focus; leadership; top management support; measurement of results; top management involvement; content quality; collaboration; formalization; communication; budgetary support; standard KM processes; culture; technology infrastructure.
Sources: Elliott & O'Dell (1999); Alavi & Leidner (2001); Gold, Malhotra, & Segars (2001); Okunoye & Karsten (2002); Edwards & Kidd (2003); Lee & Choi (2003); Lang (2004); Ward & Aurum (2004); Yu, Kim, & Kim (2004); Jennex & Olfman (2004); Hariharan (2005); Koh, Ryan, & Prybutok (2005); Robbins (2005); Sarker, Sarker, Nicholson, & Joshi (2005).
Figure 1. ISM for data collection
RESULTS AND DISCUSSION

Using the software, the values of 1, 2, 3, and 4 are translated into binary values to develop the directional graph shown in Figure 2. The actual computational results are shown in Appendix B. Results obtained using ISM represent the mental models of those who participated in the study. From that perspective, these results are subject to interpretation, hence the name interpretive structural modeling. It can be seen that each of these relations (arrows in the diagram) is tenable. While the contextual development of this structure in terms of its relevance to an academic environment is important, the configuration of these
elements and the resultant model might be different for business organizations. Nevertheless, generic insights are relevant.
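As an illustration of the translation step mentioned above, the following minimal sketch shows how coded pairwise responses of the kind collected with Figure 1 could be turned into a binary reachability matrix and closed transitively. This is only an illustration under stated assumptions: the function names and the assumption that only the white (upper-triangle) cells of the matrix are coded are ours, and the sketch is not the authors' actual software.

def to_binary(coded):
    # coded[i][j] in {1, 2, 3, 4}: 1 = row i influences column j,
    # 2 = column j influences row i, 3 = no relation, 4 = mutual influence.
    n = len(coded)
    reach = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # self-reachability
    for i in range(n):
        for j in range(i + 1, n):  # assume only the upper-triangle (white) cells are coded
            code = coded[i][j]
            if code in (1, 4):
                reach[i][j] = 1
            if code in (2, 4):
                reach[j][i] = 1
    return reach

def transitive_closure(reach):
    # Warshall's algorithm: if Eij = 1 and Ejk = 1, then Eik = 1.
    n = len(reach)
    closed = [row[:] for row in reach]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if closed[i][k] and closed[k][j]:
                    closed[i][j] = 1
    return closed

The closed matrix is then level partitioned, as described in Appendix A, to obtain the hierarchy depicted in Figure 2.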
Generic Insights

Results shown in Figure 2 suggest that top management involvement, KM leadership, and the culture of the organization are the factors that serve as driving forces to build a successful KM effort. With top management involvement, KM initiatives will gain the support and active participation of the senior executives of the organization and a greater commitment from the rest of the organization. Top management involvement would also ensure
Figure 2. Model for KM enablers
strategic focus of KM initiatives, which ultimately leads to desired results. Citing research studies, we initially argued that the nature of cause and effect in IT is separated in time and it is more exaggerated in the context of KM. Our research findings shown in figure 2 identify various levels in KM to reach intermediate targets such as content quality, collaboration, and strategic focus. These results make it obvious that the separation between cause and effect in KM systems is more pronounced. For instance, our results show that collaboration can be accomplished after standard KM processes, technology infrastructure, and communication systems are in place. Establishing and using standard processes, technology infrastructure, and communication systems is gradual and time-consuming. Furthermore, the resultant collaboration and content quality will gradually transition into knowledge transfer among people. Needless to say, actual impact on business performance, which is of interest to organizations, is a far-reaching goal.
Results also demonstrate the importance of developing other supporting factors such as leadership and budget before developing technology infrastructure for KM. However, in reality, organizations make use of existing IT infrastructure for KM implementation without fine-tuning part of it to serve the purpose of KM.
Inductive Approach

From an organizational standpoint, the importance of these results lies in the emergence of the logical flow of causal influences. This flow is not only logically consistent but is also a view that is shared by the authors of this chapter. The contextual relevance of this approach has significant implications for practice; in this case, the setting is an educational institution. Our results show that two factors, competent leadership of the KM initiative combined with support from top management, should be present, and that these would lead to budgetary support for
KM initiatives. Budgetary support would assist in developing the technology infrastructure for sharing and archiving knowledge. Figure 2 also shows that top management support would lead managers involved in the KM initiative to formalize KM-related functions and, consequently, to develop standard processes. Since resource integration, efficient and effective resource utilization, and the implementation of plans to bring stability (important tenets of management) help manage the complexity associated with these processes, the standardization of these processes is aimed at improving efficiency and effectiveness. The next logical step would be to measure the results of these processes to determine the success of KM initiatives. Results show, and it makes logical sense, that standard processes promote the quality of the content that is available for knowledge transfer. An organizational culture that encourages open and transparent communication among employees would lead to increased collaboration and knowledge sharing across the hierarchical levels of the organization. Increased communication, aided by standard processes and technology infrastructure, makes collaboration easier and enhances it.
Givens, Means, and Ends

Figure 2 can also be interpreted in terms of givens, means, and goals in a KM effort. The elements at the bottom of the figure can be considered as the set of givens. These "givens," from a management standpoint, can be considered to be aspects that are either present or not. It is generally difficult to cultivate them in the short or medium term. In our model, KM leadership, top management support, and top management involvement are considered a set of givens. "Ends" tend to be the elements at the top of the model. Collaboration, content quality, measurement of results, and strategic focus are the ends in the KM effort in the context of a university. The means are the elements that can be controlled, manipulated, or developed to form
the link between the “givens” and the “ends.” Communication, technology infrastructure, standardized processes, culture, budgetary support and formalization of the KM effort are all aspects that can be changed, increased or decreased in order to accomplish the ends. From the standpoint of enablers and barriers, this approach allows us to understand how each of these elements can behave as an enabler as well as an inhibitor to the KM effort. For instance, in Figure 2, the weakness of an element makes it an inhibitor while the strength of that very same element makes it an enabler. As a case in point, strong and effective KM leadership leads to budgetary support and formalization of the KM effort. However, weaknesses in KM leadership will dilute the support for budgetary support and the formalization of KM processes. This approach goes to show the dual nature of elements in terms of whether they are enablers or inhibitors in the KM effort. It also goes to show that it may not be useful to normatively classify elements as facilitators or inhibitors. These results have several implications. In order to build a successful KM initiative, universities need to secure top management involvement first. Next, the selection of a competent and committed leader is important for the initiative because the leader plays a critical role in securing funds and building technology infrastructure to accomplish KM goals and objectives. Universities must recognize that developing a culture that promotes communication and trust among the employees would facilitate accomplishing KM goals such as collaboration and knowledge sharing among employees. However, developing and nurturing a culture of openness and trust is usually a gradual process. Once a KM system is implemented, it is imperative that the system should maintain the strategic focus, and quality of the content for meaningful collaboration among the employees. Finally, instead of trying to evaluate knowledge directly, which may not be easy, we recommend
assessing its contribution to business performance and processes.
Future Direction

As the participants represented academic institutions, the results are not easily generalizable across all types of organizations, because the purpose of KM investments depends on the type of organization. For instance, KM investments in academic institutions are likely to focus on adding value to academic research, teaching, and effective administration, whereas in a for-profit commercial organization, KM investments are assessed from the standpoint of increasing revenue; return on investment is likely to assume greater importance. Consequently, these results should be viewed from that perspective, and we must understand that these discussions and conclusions are most appropriate for universities. Due to the limited number of participants in the research effort, future efforts should involve more stakeholders from different types of organizations to improve the validity of these results. Such an approach would provide a robust shared mental model that would be generally applicable. Further, in order to add more value, we intend to incorporate the strength of the relationships between elements by allowing users to provide a weight for each relationship.
CONCLUSION

Our approach used ISM to understand how various enablers can act as either an enabler of or a barrier to the KM effort, based on whether they are present or absent in an organization. Further, we have shown that a qualitative approach not only allows us to retain the richness of the complexity associated with the interactions among elements, but also allows us to identify elements that can act as the givens, means, and goals in the KM effort.
With these research results, we have identified important strategies and suggested avenues for successful KM implementation.
REFERENCES

Alavi, M., & Leidner, D. E. (2001). Knowledge management and knowledge management systems: Conceptual foundations and research issues. Management Information Systems Quarterly, 25(1), 107–136. doi:10.2307/3250961 Anantatmula, V. (2005, April-June). Outcomes of Knowledge Management Initiatives. International Journal of Knowledge Management, 1(2), 50–67. Anantatmula, V., & Kanungo, S. (2006). Structuring the Underlying Relations among the Knowledge Management Outcomes. Journal of Knowledge Management, 10(4). doi:10.1108/13673270610679345 Berztiss, A. T. (2001). Dimensions of the Knowledge Management Process. IEEE Computer Society, 1529-4188/01, 432-441. Bro, R. F. (1974). Interpretive Structural Modeling as Technology for Social Learning. Conference on Decision and Control, November 20-22. Phoenix, AZ: IEEE. Chourides, P., Longbottom, D., & Murphy, W. (2003). Excellence in Knowledge Management: An Empirical Study to Identify Critical Factors and Performance Measures. Measuring Business Excellence, 7(2), 29–45. doi:10.1108/13683040310477977 Conradi, R., & Dyba, T. (2001). An Empirical Study on the Utility of Formal Routines to Transfer Knowledge and Experience. ESEC/FSE. Vienna: ACM. Cooper, L. (2006). An Evolutionary Model for KMS Success. HICSS39. IEEE Computer Society.
Davenport, T. H., & Prusak, L. (1998). Working Knowledge. Boston, MA: Harvard Business School Press. Dutta, S. (1997). Strategies for Implementing Knowledge-based Systems. IEEE Transactions on Engineering Management, 44(1), 79–90. doi:10.1109/17.552810 Edwards, J. S., & Kidd, J. B. (2003, February). Knowledge Management Sans Frontiers. The Journal of Operational Research Society. Special Issue on Knowledge Management and Intellectual Capital, 54(2), 130–139. Elliott, S., & O’Dell, C. (1999). Sharing knowledge & best practices: The hows and whys of tapping your organization’s hidden reservoirs of knowledge. Health Forum Journal, 42(3), 34–37. Fernandes, A. A. (2000). Combining inductive and deductive inference in knowledge management tasks. In Eleventh International Workshop on Database and Expert Systems Applications, pp. 1109-1 114, IEEE Computer Society Press, 2000. Gold, A. H., Malhotra, A., & Segars, A. H. (2001). Knowledge Management: An Organizational Capabilities Perspective. Journal of Management Information Systems, 18(1), 185–214. Greenhalgh, T., Robert, G., MacFarlane, F., Bate, P., & Kyriakidou, O. (2004). Diffusion of innovations in service organizations: Systematic review and recommendations. The Milbank Quarterly, 82(4), 581–629. doi:10.1111/j.0887378X.2004.00325.x Hariharan, A. (2005). Implementing seven KM enablers at Bharti. Knowledge Management Review, 8(3), 8–9. Hart, W. L., & Malone, D. (1974). Goal Setting for a State Environmental Agency. Conference on Decision and Control, November 20-22. Phoenix, AZ: IEEE.
Jennex, M. E., & Olfman, L. (2004). Accessing Knowledge Management Success/Effectiveness Models. In Proceedings of the 37th Hawaii International Conference on System Sciences. HICSS37. IEEE Computer Society. Kanungo, S., & Bhatnagar, V. (2001). Beyond generic models for information system quality: The use of interpretive structural modeling. Systems Research and Behavioral Sciences. Koh, E. C., Ryan, S., & Prybutok, V. R. (2005). Creating Value through Managing Knowledge in an E-Government to Constituency (G2C) Environment. Journal of Computer Information Systems, 45(4), 32–41. Lang, J. C. (2004). Social context and social capital as enablers of knowledge integration. Journal of Knowledge Management, 8(3), 89–105. doi:10.1108/13673270410541060 Lee, H., & Choi, B. (2003). Knowledge management enablers, processes, and organizational performance: An integrative view and empirical examination. Journal of Management Information Systems, 20(1), 179. Liao, C., & Chuang, S. (2006). Exploring the Role of Knowledge Management for Enhancing Firm’s Innovation and Performance. HICSS39. IEEE Computer Society. Marr, B. (2003). Known quantities. Financial Management Journal, 26-27. McElroy, M. W. (2000). Integrating complexity theory, knowledge management and organizational learning. Journal of Knowledge Management, 4(3), 195–203. doi:10.1108/13673270010377652 McKeen, J. D., Zack, M. H., & Singh, S. (2006). Knowledge Management and Organizational Performance: An Exploratory Survey. HICSS39. IEEE Computer Society.
Okunoye, A., & Karsten, H. (2002). ITI as Enabler of Knowledge Management: Empirical Perspective from Research Organisations in sub-Saharan Africa. HICSS32. IEEE Computer Society. Robbins, S. (2005). We need a new vocabulary. Information Systems Management, 22(1), 89–90. doi:10.1201/1078/44912.22.1.20051201/85744.12 Sarker, S., Sarker, S., Nicholson, D. B., & Joshi, K. D. (2005). Knowledge Transfer in Virtual System Development Teams. IEEE Transactions on Professional Communication, 48(2), 201–218. doi:10.1109/TPC.2005.849650 Soh, C., & Markus, M. L. (1995). How IT creates business value: A process theory synthesis. In Proceedings of the Sixteenth International Conference on Information Systems, (pp. 29-41).
Waller, R. J. (1975). Application of Interpretive Structural Modeling to Priority-Setting in Urban Systems Management. Portraits of Complexity (Battelle Monograph No. 9) (Baldwin, M., Ed.). Columbus, OH: Battelle Memorial Institute. Ward, J., & Aurum, A. (2004). Knowledge Management in Software Engineering – Describing the Process. ASWEC 2004. IEEE Computer Society. Warfield, J. N. (1973). Intent Structures. IEEE Transactions on Systems, Man, and Cybernetics, 3(2). Yu, S., Kim, Y., & Kim, M. (2004). Linking Organizational Knowledge Management Drivers to Knowledge Management Performance: An Exploratory Study. HICSS37. IEEE Computer Society.
APPENDIX A: ISM METHODOLOGY

ISM analyzes a system of elements and resolves them into a graphical representation of their directed relationships and hierarchical levels. The elements may be objectives of a policy, goals of an organization, factors of an assessment, and so on. The directed relationships can be stated in a variety of contexts (referred to as contextual relationships), such as Element (i) "is greater than", "is achieved by", "will help achieve", or "is more important than" Element (j). The different steps of ISM are briefly described below (a short code sketch of the matrix steps follows the list):

• Identification of Elements: The elements of the system are identified and listed. This may be achieved through research, brainstorming, etc.
• Contextual Relationship: A contextual relationship between elements is established, depending upon the objective of the modeling exercise.
• Structural Self-Interaction Matrix (SSIM): This matrix represents the respondent's perception of the directed element-to-element relationships. Four symbols are used to represent the type of relationship that can exist between two elements of the system under consideration:
  1. V, for a relation from element Ei to Ej, but not in the reverse direction;
  2. A, for a relation from Ej to Ei, but not in the reverse direction;
  3. X, for an interrelation between Ei and Ej (both directions);
  4. O, to represent that Ei and Ej are unrelated.
• Reachability Matrix (RM): A reachability matrix is then prepared that converts the symbolic SSIM into a binary matrix. The following conversion rules apply:
  ◦ If the relation Ei to Ej = V in SSIM, then Eij = 1 and Eji = 0 in RM
  ◦ If the relation Ei to Ej = A in SSIM, then Eij = 0 and Eji = 1 in RM
  ◦ If the relation Ei to Ej = X in SSIM, then Eij = 1 and Eji = 1 in RM
  ◦ If the relation Ei to Ej = O in SSIM, then Eij = 0 and Eji = 0 in RM
  The initial RM is then modified to show all direct and indirect reachabilities; that is, if Eij = 1 and Ejk = 1, then Eik = 1.
• Level Partitioning: Level partitioning is done in order to classify the elements into the different levels of the ISM structure. For this purpose, two sets are associated with each element Ei of the system: a reachability set (Ri), the set of all elements that can be reached from Ei, and an antecedent set (Ai), the set of all elements from which Ei can be reached. In the first iteration, all elements for which Ri = Ri ∩ Ai are Level I elements. In successive iterations, the elements identified as level elements in the previous iterations are deleted, and new elements are selected for successive levels using the same rule. In this way, all the elements of the system are grouped into different levels.
• Canonical Matrix: This matrix is developed by grouping together the elements of the same level. The resultant matrix has most of its upper triangular elements as 0 and its lower triangular elements as 1. This matrix is then used to prepare a digraph.
• Digraph: "Digraph" is a term derived from "directional graph" and, as the name suggests, is a graphical representation of the elements, their directed relationships, and hierarchical levels. The initial digraph is prepared on the basis of the canonical matrix. It is then pruned by removing all transitivities to form a final digraph.
• Interpretive Structural Model: The ISM is generated by replacing all element numbers with the actual element descriptions. The ISM therefore gives a very clear picture of the system of elements and the flow of their relationships.
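The matrix steps above (SSIM coding, transitive closure of the reachability matrix, and level partitioning) are mechanical, and a small sketch can make them concrete. The following Python code is an illustration added here, not part of the original study; the dictionary-based SSIM input format and the element labels are assumptions for the example only.

```python
# Illustrative sketch of the ISM matrix steps described above (not from the original study).

def ssim_to_rm(ssim):
    """Convert a symbolic SSIM, given as {(i, j): 'V'/'A'/'X'/'O'} with i < j, into a binary RM."""
    elements = sorted({k for pair in ssim for k in pair})
    idx = {e: n for n, e in enumerate(elements)}
    size = len(elements)
    rm = [[1 if r == c else 0 for c in range(size)] for r in range(size)]  # reflexive diagonal
    for (i, j), sym in ssim.items():
        ei, ej = idx[i], idx[j]
        if sym == 'V':
            rm[ei][ej] = 1
        elif sym == 'A':
            rm[ej][ei] = 1
        elif sym == 'X':
            rm[ei][ej] = rm[ej][ei] = 1
        # 'O': both directions stay 0
    return elements, rm

def transitive_closure(rm):
    """Add indirect reachabilities: if Eij = 1 and Ejk = 1, then Eik = 1 (Warshall's algorithm)."""
    size = len(rm)
    m = [row[:] for row in rm]
    for k in range(size):
        for i in range(size):
            if m[i][k]:
                for j in range(size):
                    if m[k][j]:
                        m[i][j] = 1
    return m

def level_partition(elements, rm):
    """Assign elements to levels: Ei belongs to the current level when Ri ∩ Ai = Ri."""
    remaining = set(range(len(elements)))
    levels = []
    while remaining:
        reach = {i: {j for j in remaining if rm[i][j]} for i in remaining}
        ante = {i: {j for j in remaining if rm[j][i]} for i in remaining}
        level = {i for i in remaining if reach[i] & ante[i] == reach[i]}
        levels.append(sorted(elements[i] for i in level))
        remaining -= level
    return levels
```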
APPENDIX B: RESEARCH RESULTS

Reachability Matrix
Element 1:  1 0 0 0 0 0 0 0 0 0 0 0 0
Element 2:  0 1 0 0 0 1 1 0 0 0 0 0 1
Element 3:  1 0 1 0 0 1 0 0 0 0 1 0 1
Element 4:  0 0 0 1 1 0 0 0 0 1 0 1 0
Element 5:  0 0 0 0 1 0 0 0 0 0 0 0 0
Element 6:  0 0 0 0 0 1 0 0 0 1 0 0 0
Element 7:  0 0 0 0 1 0 1 0 1 1 0 0 0
Element 8:  0 0 1 0 0 0 0 1 0 0 1 0 1
Element 9:  0 0 0 0 0 0 0 0 1 0 0 0 0
Element 10: 0 0 0 0 0 0 0 0 0 1 0 0 0
Element 11: 0 0 0 0 1 0 1 0 0 0 1 0 0
Element 12: 0 0 0 0 0 0 0 0 0 1 0 1 0
Element 13: 0 0 0 0 0 1 0 0 0 0 0 0 1
Canonical Matrix
Element 01: Level 1: 1 0 0 0 0 0 0 0 0 0 0 0 0
Element 05: Level 1: 0 1 0 0 0 0 0 0 0 0 0 0 0
Element 09: Level 1: 0 0 1 0 0 0 0 0 0 0 0 0 0
Element 10: Level 1: 0 0 0 1 0 0 0 0 0 0 0 0 0
Element 06: Level 2: 0 0 0 1 1 0 0 0 0 0 0 0 0
Element 07: Level 2: 0 1 1 1 0 1 0 0 0 0 0 0 0
Element 12: Level 2: 0 0 0 1 0 0 1 0 0 0 0 0 0
Element 04: Level 3: 0 1 0 1 0 0 1 1 0 0 0 0 0
Element 11: Level 3: 0 1 1 1 0 1 0 0 1 0 0 0 0
Element 13: Level 3: 0 0 0 1 1 0 0 0 0 1 0 0 0
Element 02: Level 4: 0 1 1 1 1 1 0 0 0 1 1 0 0
Element 03: Level 4: 1 1 0 1 1 1 0 0 1 1 0 1 0
Element 08: Level 5: 1 1 1 1 1 1 0 0 1 1 0 1 1
Modified Reachability Matrix
Element 1:  1 0 0 0 0 0 0 0 0 0 0 0 0
Element 2:  0 1 0 0 1 1 1 0 1 1 0 0 1
Element 3:  1 0 1 0 1 1 1 0 0 1 1 0 1
Element 4:  0 0 0 1 1 0 0 0 0 1 0 1 0
Element 5:  0 0 0 0 1 0 0 0 0 0 0 0 0
Element 6:  0 0 0 0 0 1 0 0 0 1 0 0 0
Element 7:  0 0 0 0 1 0 1 0 1 1 0 0 0
Element 8:  1 0 1 0 1 1 1 1 1 1 1 0 1
Element 9:  0 0 0 0 0 0 0 0 1 0 0 0 0
Element 10: 0 0 0 0 0 0 0 0 0 1 0 0 0
Element 11: 0 0 0 0 1 0 1 0 1 1 1 0 0
Element 12: 0 0 0 0 0 0 0 0 0 1 0 1 0
Element 13: 0 0 0 0 0 1 0 0 0 1 0 0 1
Direct Reachability Matrix
Element 01: Level 1: 0 0 0 0 0 0 0 0 0 0 0 0 0
Element 05: Level 1: 0 0 0 0 0 0 0 0 0 0 0 0 0
Element 09: Level 1: 0 0 0 0 0 0 0 0 0 0 0 0 0
Element 10: Level 1: 0 0 0 0 0 0 0 0 0 0 0 0 0
Element 06: Level 2: 0 0 0 1 0 0 0 0 0 0 0 0 0
Element 07: Level 2: 0 1 1 1 0 0 0 0 0 0 0 0 0
Element 12: Level 2: 0 0 0 1 0 0 0 0 0 0 0 0 0
Element 04: Level 3: 0 0 0 0 0 0 1 0 0 0 0 0 0
Element 11: Level 3: 0 0 0 0 0 1 0 0 0 0 0 0 0
Element 13: Level 3: 0 0 0 0 1 0 0 0 0 0 0 0 0
Element 02: Level 4: 0 0 0 0 0 0 0 0 0 1 0 0 0
Element 03: Level 4: 1 0 0 0 0 0 0 0 1 1 0 0 0
Element 08: Level 5: 0 0 0 0 0 0 0 0 0 0 0 1 0
Element Levels
Level 1: Elements 1, 5, 9, 10
Level 2: Elements 6, 7, 12
Level 3: Elements 4, 11, 13
Level 4: Elements 2, 3
Level 5: Element 8
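As a check on the data above, the level-partitioning sketch from Appendix A can be applied to the modified reachability matrix; assuming the matrix is transcribed correctly, it reproduces the published element levels. The snippet below reuses the illustrative level_partition function defined in the sketch after Appendix A.

```python
# Check added for illustration: level-partitioning the modified reachability matrix
# reproduces the published levels (uses level_partition from the Appendix A sketch).
modified_rm = [
    [1,0,0,0,0,0,0,0,0,0,0,0,0],  # Element 1
    [0,1,0,0,1,1,1,0,1,1,0,0,1],  # Element 2
    [1,0,1,0,1,1,1,0,0,1,1,0,1],  # Element 3
    [0,0,0,1,1,0,0,0,0,1,0,1,0],  # Element 4
    [0,0,0,0,1,0,0,0,0,0,0,0,0],  # Element 5
    [0,0,0,0,0,1,0,0,0,1,0,0,0],  # Element 6
    [0,0,0,0,1,0,1,0,1,1,0,0,0],  # Element 7
    [1,0,1,0,1,1,1,1,1,1,1,0,1],  # Element 8
    [0,0,0,0,0,0,0,0,1,0,0,0,0],  # Element 9
    [0,0,0,0,0,0,0,0,0,1,0,0,0],  # Element 10
    [0,0,0,0,1,0,1,0,1,1,1,0,0],  # Element 11
    [0,0,0,0,0,0,0,0,0,1,0,1,0],  # Element 12
    [0,0,0,0,0,1,0,0,0,1,0,0,1],  # Element 13
]
elements = list(range(1, 14))
print(level_partition(elements, modified_rm))
# Expected: [[1, 5, 9, 10], [6, 7, 12], [4, 11, 13], [2, 3], [8]]
```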
Chapter 15
DYONIPOS:
Proactive Knowledge Supply

Silke Weiß, Federal Ministry of Finance, Austria
Josef Makolm, Federal Ministry of Finance, Austria
Doris Ipsmiller, m2n consulting and development gmbh, Austria
Natalie Egger, Federal Ministry of Finance, Austria
ABSTRACT

Traditional knowledge management often entails extra work to re-collect information that is already available electronically. A further obstacle is making the content of the collected information easily accessible to enquiries: conventional search tools return documents rather than the meaning of their content. They are typically based on searches for character strings, which usually produce many irrelevant hits and little or no context information. The research project DYONIPOS focuses on detecting the knowledge needs of knowledge workers and automatically providing the required knowledge just in time, while avoiding additional work and violations of the knowledge worker's privacy, thereby proposing a new way of support. This knowledge is made available through the semantic linkage of relevant information from existing artifacts. In addition, DYONIPOS builds an individual and an organizational knowledge base just in time.
DOI: 10.4018/978-1-60566-709-6.ch015

THE "KNOWLEDGE MANAGEMENT PROCESS MODEL" ACCORDING TO PROBST

Knowledge is a peculiar resource in that it increases when it is shared. Knowledge loss therefore implies a high risk for organizations, making it very important to retain existing knowledge. Knowledge can be classified into two types: explicit and implicit. Explicit knowledge is knowledge that has been documented and is easy to communicate. Implicit knowledge is the knowledge found in people's heads; it cannot be simply or formally described. Making existing
explicit and implicit knowledge useful is one of the major objectives of an organization. In the past, several knowledge management tools were developed to structure knowledge and to make it transparent and available. The "Knowledge Management Process Model" (Probst, Raub, & Romhardt, 2006) of Gilbert Probst shares many components with a generic management process. Probst builds knowledge management on six core processes and completes them with a start process and an evaluation process. Defining these components makes it possible to structure the knowledge management process into distinct logical phases and to intervene within a process wherever a knowledge gap has occurred; the component definition thus supplies a grid for searching for the causes of knowledge problems. The core processes are:

• knowledge identification,
• knowledge acquisition,
• knowledge development,
• knowledge distribution,
• knowledge storage, and
• knowledge application.
The process starts with the definition of knowledge objectives, which gives knowledge management a direction, and ends with the evaluation and measurement of the gathered knowledge. To define the knowledge objectives, it is important to identify the knowledge that will be important in the future, the so-called "critical knowledge". In the first process step, "knowledge identification", the existing internal and external data and knowledge sources, as well as abilities, are localized and evaluated with respect to their importance for a particular task. This is supported by so-called "yellow pages", such as knowledge landscapes, which make it obvious who owns which knowledge and how new knowledge can be gained. Gilbert Probst and Kai Romhardt state that the necessary transparency of an organization cannot be established through technology alone, without human resources (Probst & Romhardt, n.d.); people need to transfer the knowledge, for example through discussions via knowledge platforms. Within the "knowledge acquisition" process, an organization collects knowledge from other businesses, stakeholders, or external knowledge owners, because given the flood of information, organizations cannot build up all the required know-how internally. The "knowledge development" component is concerned with the processes responsible for the creation of new abilities, new products, better ideas, and more efficient processes. The knowledge needs are then linked with the knowledge sources during the "knowledge distribution" process, e.g., to transfer best-practice experiences. The sub-process "knowledge storage" ensures that the identified important knowledge, as well as experiences from the application environment, is recorded and thus becomes available for further problems and tasks. In the "knowledge application" process, the knowledge is used in business processes in order to solve particular problems. The organization has to ensure that the gathered knowledge is provided in an appropriate manner and at the appropriate time; data quality plays an important role here and should be supported by user-friendly interfaces that yield reliable data. The final step is the knowledge evaluation, which is responsible for the continuous adjustment between the knowledge objectives and the evaluated results of the sub-processes. This evaluation is very difficult because no consensus has been reached on a consistent measure. Knowledge management as interpreted by Probst and Romhardt implies additional work for the knowledge workers, because they have to search for the knowledge they need; they have to know where they can find this knowledge and how to access it. The existing knowledge first has to be collected and structured before it can be accessed. These process steps also create additional work.
While handling knowledge management processes, a knowledge worker needs a certain level of freedom to perform his or her tasks; common systems often create obstacles to taking action or making decisions. The unnecessary work involved in searching for and collecting knowledge, which also includes paperwork, makes knowledge management processes slower and more difficult. The risk of implementing user-unfriendly applications can be reduced by integrating important stakeholders into the development process (Makolm, Weiß, & Reisinger, 2007). According to Probst, knowledge workers often do not know where they can find the knowledge they need or which person they can contact (Probst & Romhardt, n.d.). The model implies a lot of effort because all phases of the system have to be executed, yet not all of the proposed steps seem equally important for collecting knowledge. In addition, it is not ensured that a knowledge worker hands over his or her knowledge to other people or organizational levels. In sum, even the best knowledge management system is of no use if the employees do not support it by providing their know-how and using the system.

The self-acting system DYONIPOS – Dynamic Ontology based Integrated Process Optimisation – is based on a totally new approach. DYONIPOS produces no additional work because the knowledge is delivered automatically, proactively, and just in time, based on the knowledge needs of the users. Furthermore, the DYONIPOS project ensures innovative results by integrating all important stakeholders into the development process by means of a joint venture of research, industry, and public administration. A premise of DYONIPOS is that no additional work should be generated for the knowledge workers: knowledge is extracted from existing artifacts as produced by its users, and the structuring is carried out automatically.

This article is structured as follows: Section 1 provides a short introduction to knowledge management by describing and discussing the knowledge management process model according to Gilbert Probst. Section 2 presents the relations between the research objectives and e-Government, taken as a case study, as well as the knowledge management process with and without the support of DYONIPOS. Section 3 lists the premises and challenges of the projects, and Section 4 describes the DYONIPOS technologies. Section 5 specifies the project settings, the results of the first and second test phases, and the next development steps. The article concludes with a presentation of the success factors, risks, and benefits of the DYONIPOS project.
KNOWLEDGE MANAGEMENT AND E-GOVERNMENT

"Knowledge is relevant information in context" – this is the short working definition of knowledge within the research project DYONIPOS. Knowledge has always played a central role in the public sector because of its economic importance; supplying public services without knowledge is simply not possible. Knowledge workers need more and more knowledge to perform their daily work, even more knowledge is required for the execution of several tasks, and the ad-hoc part of processes increases constantly. In addition, knowledge acquisition becomes more complex because the amount of information rises steadily and heterogeneous systems are in use. Furthermore, the multitude of located information hinders the selection of the knowledge that is really needed. As a result, existing knowledge gaps keep growing, and existing information that would support a task remains unused. Often the knowledge workers are aware neither of their knowledge gaps nor of the existing information. It is also very time-consuming to search for information across many different sources and to formulate enquiries for conventional search tools. These search tools often fail to deliver the desired results, displaying unimportant information and missing what is necessary; the unwanted information can only be discarded after it has been reviewed and judged unimportant. In e-Government scenarios specifically, the use of information and communication technologies to improve and exchange services is critical, as it results in better-quality work for citizens and the economy.
Knowledge Management as Done Until Now?

A typical knowledge worker task would be to quickly work through a topic that is very important and interesting for him or her. The knowledge worker neither knows where the corresponding information about this topic is stored nor which colleagues can be asked in this regard. To get an overview of the required information, a knowledge worker usually searches different sources (e.g., several servers, the local hard drive, the internet, the e-mail archive, and specific applications such as the electronic record system (ELAK1)) using different search tools, and has to issue each search enquiry separately. Knowledge workers dedicate much time and effort to searching for and analyzing information. A survey within the DYONIPOS project produced the following result: key users spend between 2.1% and 60% of their daily working time on such enquiries (average value: 16.1%). This search and analysis time can be reduced significantly (Makolm & Weiß, 2007).
Knowledge Management with Support of DYONIPOS

With DYONIPOS, a knowledge worker no longer needs to search actively in different sources with different search tools. DYONIPOS provides the knowledge autonomously, where and when it is needed, matched to the knowledge worker's current working context. DYONIPOS learns from the user's interactions with his or her systems, from the activities performed, and from the analysis of the information resources accessed; from this, DYONIPOS deduces the working context (Kröll, Rath, Weber, Lindstaedt, & Granitzer, 2007). DYONIPOS recognizes, for example, whether the knowledge worker is preparing a presentation with certain content or looking for certain information on the internet. The individual knowledge base is developed out of the extracted concepts, relationships, and user interactions. The system identifies the information need from the detected context and automatically issues optimized queries to the semantically consolidated knowledge base. The individual assistant DYONIPOS searches for relevant information, e.g., documents, web sites, electronic records, colleagues who can help with the current task, organizational units engaged with the topic, and associated concepts in general, and provides these information resources to the user automatically. A user can also search specifically for information and check resources for similarities. If desired, DYONIPOS clusters the relevant resources automatically into topics and creates an interactive 3D visualization, the so-called "topic landscape". A knowledge worker who uses DYONIPOS neither needs to know where the relevant knowledge is stored nor has to search for it. Furthermore, knowledge workers receive information they did not even know existed in the organization. Through the use of DYONIPOS, knowledge workers do not have to search different sources with several search tools, and the supplied information (associated concepts such as the knowledge owner, the association graph, and topic landscapes) facilitates the screening of the search results. Transaction and process costs can be reduced considerably through the reduction of search time, and explicit knowledge enquiries cease to exist. In addition, this leads to an optimization of the performed workflows. Duplicated work can be reduced because similar existing work is displayed automatically; such work can perhaps be adopted after small changes. DYONIPOS also contributes to reducing e-mail avalanches, because knowledge no longer has to be exchanged by e-mail; the organizational knowledge is directly offered and available to all knowledge workers.
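The retrieval behaviour described here can be pictured with a small sketch: derive a query from the current working context and rank knowledge-base resources against it. The code below is only an illustration added for this purpose; the real DYONIPOS system works on a semantic index built with the Know-Miner framework, not on the simple word counts and file names assumed here.

```python
# Illustrative sketch only: rank knowledge-base resources against the user's
# current working context. DYONIPOS itself uses a semantic index; this toy
# version uses plain word counts and cosine similarity.
import math
import re
from collections import Counter

def bag_of_words(text):
    return Counter(re.findall(r"[a-zäöüß]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def proactive_hits(working_context, knowledge_base, top_n=3):
    """Return the resources most similar to the detected working context."""
    query = bag_of_words(working_context)
    scored = [(cosine(query, bag_of_words(text)), name) for name, text in knowledge_base.items()]
    return [name for score, name in sorted(scored, reverse=True)[:top_n] if score > 0]

# Hypothetical example data
knowledge_base = {
    "record_2007_114.elak": "electronic record about semantic search in tax administration",
    "slides_km.ppt": "presentation on knowledge management processes according to Probst",
    "minutes_team.doc": "meeting minutes about the project roll-out schedule",
}
context = "the user is preparing a presentation on knowledge management processes"
print(proactive_hits(context, knowledge_base))
```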
PREMISES

DYONIPOS follows the premise of producing no extra work for the knowledge workers and of generating knowledge out of existing artifacts (e.g., texts), so that no additional post-capture of knowledge is necessary. A further premise is that DYONIPOS proactively provides knowledge workers with detected, context-sensitive knowledge. This means that DYONIPOS recognizes changes in the user's context and automatically provides the knowledge required in the current context. Furthermore, DYONIPOS builds up, "on the fly", an individual and an organizational knowledge base, which rely on the artifacts produced by the users and on websites released by users into the organizational knowledge base. DYONIPOS supports the knowledge workers in such a way that they can perform their work autonomously. The information necessary for the fulfillment of their work is offered proactively and context-sensitively. It contains, on the one hand, individual knowledge that has been generated by the knowledge workers themselves and, on the other hand, organizational knowledge that has been generated by other knowledge workers. Organizational knowledge is the knowledge that is explicitly released into the organizational knowledge base or that lies in the repositories integrated by DYONIPOS. Through the proactive and context-sensitive provision of knowledge, the quality of the delivered services improves, because the knowledge workers are adequately provided with the knowledge available in the organization.
THE UNDERLYING TECHNOLOGIES

In the DYONIPOS project, semantic and knowledge discovery technologies were used to develop the proactive assistant (Rath, Kröll, Andrews, Lindstaedt, Granitzer, & Tochtermann, 2006). The technological challenges of the project were the semantic integration of heterogeneous data sources, the semantic harmonization of extracted entities, the automatic, self-learning recognition of the user context, the deduction of the knowledge worker's information needs, the proactive provision of the needed information and information sources from the personal and the organizational knowledge base, and the provision of various possibilities for the subsequent analysis of the relevant hits. The data from heterogeneous databases is virtually mapped and semantically consolidated into semantic structures via modular connectors; RDF(S) and the ontology language OWL were used to represent the semantic structures. The technological basis of these innovative approaches is formed by the "m2n Intelligence Management" application framework of m2n consulting and development gmbh and the knowledge discovery framework "Know-Miner" of the Know-Center Graz. The "m2n Intelligence Management" framework performs model-based data integration, flexible linkage of highly modular services in the form of various program flows, and the graphical design of the user interface. "Know-Miner" provides effective knowledge discovery techniques, such as entity recognition, vectorization, clustering, association indexes, and classification approaches on the basis of a semantic index. For the acquisition of a user's work context, the "DYONIPOS Task Recognizer" – a Java program – was implemented. Different sensors of the "Context Observer Module" observe the activities of the knowledge worker based on keyboard entries and mouse clicks (events). DYONIPOS uses a key-logger to capture the user's events and stores all
recognized events in an event protocol (Rath, Kröll, Lindstaedt, & Granitzer, 2007). To reduce the immense amount of data, the captured events are filtered: all irrelevant data, for example mouse moves that carry no relevant information, is deleted. After that, the relevant events are bundled into so-called event blocks by means of a relation analysis. At the moment, generic application-based and web-browser-based rules are implemented (Rath, 2007). These rules assign events, for example, on the basis of the file name and the name of the application of the currently open window. Further rules can be added with little effort (Kröll, Rath, Granitzer, Lindstaedt, & Tochtermann, 2006; Kröll, Rath, Weber, Lindstaedt, & Granitzer, 2007). In a next step, the event blocks are assigned to tasks. This is done using k-nearest-neighbour classification and a support vector machine based on graph kernels [KRWL06]. Furthermore, tasks can be recognized through clustering, based on similarities between content and structural characteristics, as well as through the scatter/gather approach. RDF is the key technology of DYONIPOS: all events, event blocks, and tasks are encoded as RDF triples. This means that all data, for example from integrated applications, documents, presentations, e-mails, etc., is stored in a structured way. Through the creation of semantic linkages between the contents of different repositories, new knowledge can be made accessible. Newly recognized resources are, for example, the names of experts or the topics of an organization; through the supply of expert names, implicit knowledge is also made accessible. DYONIPOS offers this newly accessible knowledge proactively and context-sensitively as well. It has to be noted that only the names of official DYONIPOS users are displayed; these users are listed in a so-called whitelist. Besides new possibilities for representation, analysis, and knowledge management, these technologies also brought new challenges. The automatic recognition, development, and storage of person-related data raised questions concerning data protection and privacy. These questions were discussed with the Austrian data protection commission and with the staff council of the Federal Ministry of Finance. The parallel implementation of the funded research project and the use-case project made it possible to constantly exchange ideas between research and practice, which was useful for both projects. Furthermore, the inclusion of all stakeholders – researchers, users, IT experts, and also the staff council – in the development process ensures that the results of the research project DYONIPOS can and will be transformed optimally and in real time into a practical application (Makolm & Orthofer, 2007).
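A rough sketch of the event pipeline just described (filter low-level events, bundle them into event blocks, and encode the result as triples) might look as follows. This is an assumed simplification added for illustration: the actual Task Recognizer is a Java component, its rule base is richer, and the task assignment relies on graph-kernel classification rather than the simple window-based grouping shown here.

```python
# Illustrative sketch of the event pipeline described above (assumed simplification;
# the real DYONIPOS Task Recognizer is a Java component with a richer rule base).

def filter_events(events):
    """Drop low-level noise such as bare mouse moves that carry no information."""
    return [e for e in events if e["type"] != "mouse_move"]

def group_into_event_blocks(events):
    """Bundle consecutive events that belong to the same application window."""
    blocks, current = [], []
    for e in events:
        if current and e["window"] != current[-1]["window"]:
            blocks.append(current)
            current = []
        current.append(e)
    if current:
        blocks.append(current)
    return blocks

def block_to_triples(block_id, block):
    """Encode an event block as simple (subject, predicate, object) triples."""
    triples = [("eventblock:%d" % block_id, "dyonipos:usesWindow", block[0]["window"])]
    for n, e in enumerate(block):
        triples.append(("event:%d_%d" % (block_id, n), "dyonipos:partOf", "eventblock:%d" % block_id))
        triples.append(("event:%d_%d" % (block_id, n), "dyonipos:hasType", e["type"]))
    return triples

# Hypothetical event protocol
events = [
    {"type": "mouse_move", "window": "report.doc - Word"},
    {"type": "key_press", "window": "report.doc - Word"},
    {"type": "key_press", "window": "report.doc - Word"},
    {"type": "mouse_click", "window": "ELAK - Internet Explorer"},
]
for i, block in enumerate(group_into_event_blocks(filter_events(events))):
    print(block_to_triples(i, block))
```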
THE DEVELOPMENT PROCESS OF DYONIPOS

The DYONIPOS prototype was developed in a joint venture of research, industry, and public administration. The development took place in two projects, the research project and the use-case project DYONIPOS, which complement each other through knowledge transfer and cooperation. The DYONIPOS research consortium consists of m2n consulting and development gmbh2, the Know-Center Graz3, the Institute for Information Systems and Computer Media (IICM) of the Graz University of Technology4, and HP Austria5. The objective of the research project was to design and develop the pilot software DYONIPOS. The research project started on January 2nd, 2006 and was completed in the first quarter of 2008. It was funded by the "semantic systems" program within FIT-IT, an Austrian research program provided by the Federal Ministry of Transport, Innovation and Technology (BMVIT)6. At the beginning, the major objective was to design and develop the first prototype of DYONIPOS. After this, the first prototype was tested within the first step of the parallel implemented
use-case project. The use-case project is structured into three phases: the first test, the second test, and the final test. Through these tests, key users had the chance to take an active part in the design process of the software. Within the research project, two refinement steps were planned, and all tests served as a basis for improving the DYONIPOS functions. In December 2007, the second test of a fundamentally new version of the prototype was started; the final test of the prototype was scheduled for the end of July 2008, after which DYONIPOS will be finished. A second research project, "DYONIPOS II", funded by COMET (an Austrian research program), follows the first research project to develop DYONIPOS further and to transfer the pilot software into a productive and user-friendly application. For this purpose, a refinement project will be carried out within the Directorate General of Information Technology (DG-IT) of the Austrian Federal Ministry of Finance. It is again the aim of the refinement project to bring the scientific results of the research project DYONIPOS II into practical use. Further on, a roll-out project is planned; its objective is that DYONIPOS will support approximately 100 employees of the DG-IT by the middle of 2009. Within the first DYONIPOS project, the prototype was improved by incorporating the wishes and needs of the key users. In the first test, the information collection was based on questionnaires and protocols. Within the second test, the key users could report failures that occurred during the test by e-mail, via a wiki, or via a bug-tracking tool. Furthermore, the staff department was informed about important project results, and its opinion was obtained concerning data protection and data security in the project.
The First Test Phase

The first test took place from April to May 2007. Ten key users participated in this first test of the pilot software. During the test period, all key users used
the client software DYONIPOS, which was installed on each key user's computer. In this first phase, the test included only the client components and no server installation. Initial training courses in groups, as well as individual courses aligned with the abilities of each key user, took place to show the key users how to handle DYONIPOS in practice and to prepare them optimally for the tests. In addition, the functional spectrum of DYONIPOS was demonstrated in practice. Before and during the test, the executives and key users were informed by e-mail about important project results. The first test was used to collect information about the daily work of a knowledge worker, and the gathered data was semantically enriched through the manual assignment of details about the key users' processes, tasks, and events. The key users proposed which detected events or event blocks should be composed into a task and which tasks should be summarized into a process, and they named the tasks and processes. A personal knowledge base was built from the resources and activities contributed by the key users, serving as a basis for teaching DYONIPOS the key users' ways of working. After the refinement of the first version, the assignment of event blocks to a task and of tasks to a process was carried out autonomously by DYONIPOS. In addition, the key users were asked to test the first version of the software by critically checking its functionalities and graphical user interface and to give constructive suggestions for improvement. Requirements were collected directly from the key users; thus the developers gained a realistic insight into daily knowledge work. The key users documented their experiences with the software during the test phase, as well as their suggestions for improvement, by filling out a questionnaire and by keeping a test protocol. The feedback of the key users was consistently positive. The analysis of the questionnaires, protocols, and log files led to the following results, which are
in turn implications for the further development of the DYONIPOS system. The first result was that large parts of the knowledge work were carried out on the computer; exceptions were individual discussions and conceptual thinking. MS Word, GroupWise, MS Excel, the internet, MS Internet Explorer, ELAK, SQL-Navigator, and further specific applications were the most frequently used software applications. Work was often interrupted by telephone calls, appointments, or urgent e-mails. The time users spent searching for and collecting information varied from a few minutes up to several hours a day. Frequently required information includes documents stored on the local hard drive and information provided via the internet; further sources are server drives, but also other knowledge owners such as colleagues or friends, and participation in seminars is also very important. Specific wishes of the key users were that the DYONIPOS window should remain visible even when other program windows are maximized, and that the search functionality should be extended, e.g., with searches for documents within a period or at a point of time, or with the use of wildcards. Some key users wanted more options to adjust the software; others wished for more support and suggestions from the DYONIPOS system. In addition, there were interface requirements concerning the inconsistency of the start and stop buttons, a reminder to switch DYONIPOS on again in case a user had switched the system off, and the mix of English and German terms in the user interface, which was perceived as distracting. Several suggestions related to the creation of tasks; for example, an extra button for creating a task was requested, since in the first version of the software tasks could only be created by searching and choosing the function in a pull-down menu. The key users perceived the logging of DYONIPOS as too extensive in some respects and too limited in others. The
logging of each intermediary step, for example each click through cascaded folders, was perceived as unimportant, whereas displaying the whole URL of the last opened web site, and not only the homepage, was desired. Furthermore, displaying the subject of an e-mail in addition to the e-mail address would be helpful. After setting up the DYONIPOS software, some users suspected performance problems in the system software and other applications. All of these challenges had to be addressed within the first refinement process. The first test proceeded very well thanks to the engagement of the key users and offered a solid basis for the further development and extended implementation of the second DYONIPOS prototype. To integrate more applications, additional sensors were developed, and the graphical user interface was adapted according to the users' requirements in order to obtain a user-friendly application.
The Second Test Phase

The second test phase started in January 2008 and took approximately two months. A fundamentally improved version of the DYONIPOS prototype, which establishes an organizational knowledge base with new functionalities and which also includes artifacts stored on servers as well as electronic records, was tested by 13 key users. In the second test, the formerly manual assignment of event blocks to tasks worked automatically; a key user merely supervised this assignment by correcting wrongly assigned event blocks and confirming correctly assigned ones. The second test phase yielded the following helpful key-user suggestions: more detailed context information about an information resource would be helpful for a knowledge worker; opening search results directly in the relevant application saves time; and the analysis of individual
search results could be beneficial. Another main point was that the training of tasks is too time-consuming; the conclusion was drawn that the detection of tasks is entirely sufficient to support the knowledge worker with the appropriate knowledge. The second prototype therefore enabled the classification of detected resources and the visualization of associated concepts through a star-shaped graph and topic landscapes. In the topic landscape, resources that are thematically similar are mapped close together; different resources can be selected, and it can be displayed how similar they are. In addition, DYONIPOS allows the selection of artifacts according to their sources, e.g., the file system, KOMPASS (a proprietary database containing the staff and their roles and authorizations), electronic records (ELAK), and the Web. Based on the identified information needs, the DYONIPOS task recognizer also indicates persons associated with certain concepts. The graph that shows these concepts is, as mentioned above, star-shaped, for example with the name of a person in the centre. In the association graph, persons, organizations, etc. are identified and connected with several topics linked to the needed information. The graph also shows the company and department where a person works, his or her contact details, information about other projects assigned to that person, and the concepts with which he or she is associated. Links to further information are available by clicking on a symbol, indicating for example whether the person is responsible for semantic technologies.
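The "topic landscape" and the similarity display mentioned above amount to grouping resources by pairwise similarity. The following sketch is a deliberately naive illustration added here (single-link grouping on Jaccard similarity over word sets); the DYONIPOS prototype uses the clustering and vectorization services of the Know-Miner framework instead, and the file names are hypothetical.

```python
# Naive illustration of grouping thematically similar resources into "topics".
# Assumed simplification; the prototype uses Know-Miner's clustering services.
import re

def words(text):
    return set(re.findall(r"[a-zäöüß]+", text.lower()))

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def topic_groups(resources, threshold=0.25):
    """Single-link grouping: a resource joins a group if it is similar to any member."""
    groups = []
    for name, text in resources.items():
        w = words(text)
        for group in groups:
            if any(jaccard(w, words(resources[other])) >= threshold for other in group):
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

# Hypothetical resources
resources = {
    "a.doc": "semantic search for electronic records",
    "b.doc": "semantic search and ontologies",
    "c.doc": "travel expense report for the project meeting",
}
print(topic_groups(resources))
# e.g. [['a.doc', 'b.doc'], ['c.doc']]
```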
Next Steps

Finally, the third test phase was scheduled to start in May 2009. The user requirements as well as the research results from the first and second test phases were successfully implemented in the system and thus became available for this phase. In the third test phase, the key users will also
test the full functionality of DYONIPOS; the phase ends with an evaluation and documentation of the use-case results in a final project report. Parallel to the final test, the first preparations for refining the DYONIPOS prototype into productive software will be made. The roll-out – including, e.g., the set-up of an adequate infrastructure – will start on the first of June 2009. This paper reflects the status of the DYONIPOS project as of February 2009. In the meantime, the project has been finished successfully, and DYONIPOS is in productive use in the Austrian Federal Ministry of Finance. Further usage in other organizations is under discussion.
SUCCESS FACTORS, RISKS AND BENEFITS OF DYONIPOS

The success of the project so far rests on a highly successful joint venture between research, industry, and public administration and on the modern project structure, which allows the parallel implementation of a research project and a use-case project. Such project structures have proven successful for work on uncertain and highly dynamic questions: through the parallel structure, the direct application of new technologies is proven in practice on the one hand, and practice guides research towards user-friendly applications on the other. Testing this joint-venture approach was another objective of the DYONIPOS project. Because of the substantial benefits gained from the know-how transfer between research and practice, this approach will continue to be used in the DG-IT at the Ministry of Finance. In the research and the use-case project, a notably high share of the communication was carried out via electronic media. Because of the modular project structure, cross-organizational, distributed project teams (located in Vienna, Linz, and Graz) were established between the project partners from research, industry, and public administration. These project groups
communicated by e-mail as well as via several internet applications (a bug tracker and a wiki). Through the use of these technologies, good results could be achieved in a short time. Like every knowledge management project, this project also had risks and barriers that had to be handled. The main risk was that the staff of the Directorate General of Information Technology of the Federal Ministry of Finance would not accept or use DYONIPOS. This risk was addressed through comprehensive key-user tests and by involving all stakeholders in the software development process. The objective of all parties concerned was to develop a user-friendly application that can be used intuitively without a large training effort. Furthermore, an information campaign will be started before the roll-out project begins: DYONIPOS will be publicized in several newspapers, subscribed e-mail lists, on the intranet, and on the internal screen saver. It is also beneficial for outsiders when the experiences of the project are published, because they can learn from the mistakes made and the successes achieved within the project. A further reason for user rejection could be data protection or privacy concerns. It is important to inform users about transparency, data protection, and privacy principles and about the agreements with the staff council, and thereby to build trust. DYONIPOS only provides knowledge that would also be available to the users without DYONIPOS; any additionally provided knowledge must be explicitly released by the user. Furthermore, users can remove themselves from the system without giving any reasons. DYONIPOS enables a reduction of process costs, because knowledge workers can reduce the time they spend searching for relevant information, freeing time for other tasks. Additionally, knowledge workers of the Austrian Federal Ministry of Finance can react faster and more precisely to questions from customers (citizens and businesses), because DYONIPOS also supports the active, semantically based search for and analysis
of information. Through the implementation of DYONIPOS in the public administration, administrative services can be provided in an optimal way; for this reason, DYONIPOS helps to strengthen Austria as a business location. The use of DYONIPOS in the DG-IT of the Federal Ministry of Finance, in particular, makes it possible to implement e-Government solutions efficiently. The "virtuality" of cooperation was increased fundamentally through the use of DYONIPOS, particularly because of the proactive exchange of organizational knowledge and the display of knowledge owners. DYONIPOS delivers knowledge to the knowledge workers that is useful for them, and virtual cooperation is fostered through the context-sensitive display of knowledge owners: these are potential conversation partners who can provide information in a concrete work situation.
REFERENCES

Kröll, M., Rath, A., Granitzer, M., Lindstaedt, S., & Tochtermann, K. (2006). Contextual Retrieval in Knowledge Intensive Business Environments. GI-Workshop Information Retrieval.

Kröll, M., Rath, A., Weber, N., Lindstaedt, S., & Granitzer, M. (2007). Task Instance Classification via Graph Kernels, Mining and Learning with Graphs. Italy: ICDM Workshop.

Makolm, J., & Orthofer, G. (2007). Holistic Approach, Stakeholder Integration and Transorganizational Processes: Success Factors of FinanzOnline. In E-Taxation: State & Perspectives, E-Government in the Field of Taxation: Scientific Basis, Implementation Strategies, Good Practice Examples. Series Informatics, 21, 389-402.
Makolm, J., & Weiß, S. (2007). Forschung und Praxis Hand in Hand. In Zeitschrift des Telematik-Ingenieur-Verbandes TIV, Telematik 1/2007, TELEkommunikation & InforMATIK, Knowledge Worker Productivity, Web2.0 und Semantic Web: Was bringt's für Unternehmen? (p. 12). Use Case DYONIPOS.

Makolm, J., Weiß, S., & Reisinger, D. (2007). Proactive Knowledge Management: The DYONIPOS Research and Use Case Project. In Proceedings of the First Conference on Theory and Practice of Electronic Governance – ICEGOV 2007 (p. 85). Macao.

Probst, G., Raub, S., & Romhardt, K. (2006). Wissen managen: Wie Unternehmen ihre wertvollste Ressource optimal nutzen. Berlin, Germany: Gabler Verlag.

Probst, G., & Romhardt, K. (n.d.). Bausteine des Wissensmanagements - ein praxisorientierter Ansatz. Retrieved from http://wwwai.wu-wien.ac.at/~kaiser/seiw/Probst-Artikel.pdf

Rath, A. (2007). A Low-Level Based Task And Process Support Approach for Knowledge-Intensive Business Environments. In Proceedings of the 5th International Conference on Enterprise Information System Doctoral Consortium DCEIS 200 (pp. 35-42). Madeira, Portugal. June 12, 2007.
Rath, A., Kröll, M., Andrews, K., Lindstaedt, S., Granitzer, M., & Tochtermann, K. (2006). Synergizing Standard and Ad-Hoc Processes. In Proceedings of the 6th International Conference on Practical Aspects of Knowledge Management. Vienna, Austria: LNCS Springer.

Rath, A., Kröll, M., Lindstaedt, S., & Granitzer, M. (2007). Low-Level Event Relationship Discovery for Knowledge Work Support. In Proceedings of the 4th Conference on Professional Knowledge Management, "ProKW 2007 Productive Knowledge Work: Management and Technological Challenges". Potsdam, Germany, March 28–30, 2007. Berlin: GITO-Verlag.
ENDNOTES

1. Abbreviation of "elektronischer Akt", which means the Austrian electronic records management system.
2. http://www.m2n.at
3. http://en.know-center.at/
4. http://www.iicm.tu-graz.ac.at/rootcollection?timestamp=1188552118992
5. http://welcome.hp.com/country/uk/en/welcome.html
6. http://www.bmvit.gv.at/en/index.htm
Compilation of References
Aaker, D. A. (1998). Strategic marketing management. New York: John Wiley. Aaker, D. A., Kumar, V., & Day, G. S. (1998). Marketing research. New York: John Wiley. Aaker, D. (1989). Managing Assets and Skills: the Key to a Sustainable Competitive Advantage. California Management Review, 31, 91–106. Ackermann, F., Franco, L. A., Gallupe, R. B., & Parent, M. (2005). GSS for Multi-Organizational Collaboration: Reflections on Process and Content. Group Decision and Negotiation, 14(4), 307–331. doi:10.1007/s10726005-0317-4 ActKM discussion forum. (2006). On Governance. Retrieved 20 February, 2006, from http://www.actkm.org.au Agarwal, R., Krudys, G., & Tanniru, M. (1997). Infusing learning into an information systems organization. European Journal of Information Systems, 6(1), 25–40. doi:10.1057/palgrave.ejis.3000257 Alavi, M., & Leidner, D. E. (2001). Knowledge management and knowledge management systems: Conceptual foundations and research issues. Management Information Systems Quarterly, 25(1), 107–133. doi:10.2307/3250961 Alavi, M., Kayworth, T. R., & Leidner, D. E. (2005). An Empirical Examination of the Influence of Organizational Culture on Knowledge Management Practices. Journal of Management Information Systems, 22(3), 191–224. doi:10.2753/MIS0742-1222220307
Alavi, M., & Leidner, D. E. (1999). Knowledge Management Systems: Emerging Views and Practices from the Field. In Proceedings of the 32nd Hawaii International Conference on System Sciences. Hawaii, USA: IEEE Computer Society Press. Alavi, M., & Leidner, D. E. (1999). Knowledge Management Systems: Issues, Challenges, and Benefits”, Communications of the AIS, 1. Almashari, M., Zairi, M., & Alathari, A. (2002). An Empirical Study of the Impact of Knowledge Management on Organizational Performance. Journal of Computer Information Systems, 42(5), 74–82. Amabile, T. (1996). Creativity in context. Boulder, CO: Westview Press. Amabile, T. (1997). Motivating creativity in organizations: On doing what you love and loving what you do. California Management Review, 40(1), 39–58. Amar, A. (2002). Managing Knowledge Workers. Westport: Quorum Books. Anantatmula, V. S. (2005). Outcomes of Knowledge Initiatives. International Journal of Knowledge Management, 1(2), 50–67. Anantatmula, V., & Kanungo, S. (2006). Structuring the Underlying Relations among the Knowledge Management Outcomes. Journal of Knowledge Management, 10(4), 25–42. doi:10.1108/13673270610679345
Anantatmula, V. S. P. (2007). Linking KM effectiveness attributes to organizational performance. VINE: The Journal of information and knowledge management systems, 37(2), 133-149. Anantatmula, V., & Kanungo, S. (2005). Establishing and Structuring Criteria for Measuring Knowledge Management Efforts. In Proceedings of the 38th Hawaii International Conference on System Sciences. Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103(3), 411–423. doi:10.1037/0033-2909.103.3.411 Anson, R., Bostrom, R. P., & Wynne, B. (1995). An Experiment Assessing GSS and Facilitator Effects on Meeting Outcomes. Management Science, 41(2), 189–208. doi:10.1287/mnsc.41.2.189 April, K. A. (2002). Guidelines for developing a K-strategy. Journal of Knowledge Management, 6(5), 445–456. doi:10.1108/13673270210450405 Argote, L., McEvily, B., & Reagans, R. (2003). Managing Knowledge in Organizations – An Integrative Framework and Review of Emerging Themes. Management Science, 49(4), 571–582. doi:10.1287/mnsc.49.4.571.14424 Arnett, D. B., Laverie, D. A., & Meiers, A. (2003). Developing parsimonious retailer equity indexes using partial least squares analysis: A method and applications. Journal of Retailing, 79, 161–170. doi:10.1016/S00224359(03)00036-8 Arthur Andersen. (1996). The knowledge management practices book: A guide to who’s doing what in organizational knowledge management. Arthur Andersen Consulting. Ashok, G. (1999). Business driven research & development: managing knowledge to create wealth. West Lafayette: Ichor Business.
Asoh, D. A., Belardo, S., & Crnkovic, J. (2005). Computing the Knowledge Management Index: Validation of the instrument. Paper presented at the 16th Information Resources Management Association Conference (IRMA 2005), San Diego, California. Asoh, D., Belardo, S., & Crnkovic, J. (2002). Modeling and constructing the Knowledge Management Index of organizations. Paper presented at the 6th World Multiconference on Systemics, Cybernetics, and Informatics (SCI 2002), Orlando, FL. Asoh, D., Belardo, S., & Crnkovic, J. (2004). The relationship between the Knowledge Management Index and organizational performance: A preliminary empirical analysis. Paper presented at the 15th International Information Resource Management Association Conference (IRMA 2004), New Orleans, Louisiana. Bagozzi, R. P., Yi, Y., & Phillips, L. W. (1991). Assessing Construct Validity in Organizational Research. Administrative Science Quarterly, 36, 421–458. Bahmanziari, T., Pearson, J. M., & Crosby, L. (2003). Is trust important in technology adoption? A policy capturing approach. Journal of Computer Information Systems, 43(4), 46–54. Bajwa, D. S., Lewis, L. F., & Pervan, G. (2003). Adoption of Collaboration Information Technologies in Australian and US Organizations: A Comparative Study, Proceedings of the 40th Hawaii International Conference on System Sciences. Retrieved April 17, 2009 from http://www2.computer.org/plugins/dl/pdf/proceedings/ hicss/2003/1874/01/187410017c Bakker, M., Leenders, R. T. A. J., Gabbay, S. M., Kratzer, J., & Van Engelen, J. M. L. (2006). Is trust really social capital? Knowledge sharing in product development projects. The Learning Organization, 13(6), 594–605. doi:10.1108/09696470610705479 Bals, C., Smolnik, S., & Riempp, G. (2007). A Case for Integrated Knowledge Management. In Proceedings of the 4th Conference Professional Knowledge Management: Experiences and Visions. Berlin, Germany: GITO.
Bals, C., Smolnik, S., & Riempp, G. (2007). Assessing User Acceptance of a Knowledge Management System in a Global Bank: Process Analysis and Concept Development. In Proceedings of the 40th Hawaii International Conference on System Sciences. Hawaii, USA: IEEE Computer Society Press. Barclay, D., Higgins, C., & Thompson, R. (1995). The Partial Least Square (PLS) approach to casual modeling: Personal computer adoption and use as an illustration. Technology Studies, 2(2), 285–309. Barna, Z. (2003). Knowledge Management: A Critical EBusiness Strategic Factor (Unpublished Master’s Thesis). San Diego State University. Barney, J. (1991). Firm Resources and Sustained Competitive Advantage. Journal of Management, 17(1), 99–120. doi:10.1177/014920639101700108 Barney, J. B. (2001). Is the resource-based “view” a useful perspective for strategic management research? Yes. Academy of Management Review, 26(1), 41–56. doi:10.2307/259393 Barney, J. B. (1986). Organizational culture: Can it be a source of sustained competitive advantage? Academy of Management Review, 11(3), 656–665. doi:10.2307/258317 Barquin, R. C., Bennet, A., & Remez, S. G. (2001). Knowledge Management: The Catalyst for Electronic Government. Vienna, VA: Management Concepts. Barth, S. (2000, October). KM Horror Stories. Knowledge Management Magazine, 3, 37–40. Bayyavarapu, H. B. (2005). Knowledge management strategies and firm performance. London, Ontario: The University of Western Ontario. Becerra-Fernandez, I., & Sabherwal, R. (2001). Organizational Knowledge Management: A contingency perspective. Journal of Management Information Systems, 18(1), 23–55. Bertzsis, A. T. (2001). Dimensions of the Knowledge Management Process. IEEE Computer Society, 15294188/01, 432-441.
Berztiss, A. T. (2002). Capability Maturity for Knowledge Management. In Proceedings of the 13th International Workshop on Database and Expert Systems Applications (DEXA’02) Bierly, P., & Chakrabarti, A. (1996). Generic Knowledge Strategies in the U.S. Pharmaceutical Industry. Strategic Management Journal, 17(Winter Special Issue), 123-135. Binney, D. (2001). The knowledge management spectrum - understanding the KM landscape. Journal of Knowledge Management, 5(1), 33–42. doi:10.1108/13673270110384383 Birkinshaw, J., & Sheehan, T. (2002). Managing the Knowledge Life Cycle. MIT Sloan Management Review, 44(1), 75–83. Blomqvist, K., & Ståhle, P. (2000). Building organizational trust. Paper presented at the 16th IMP-conference. from http://www.impgroup.org/paper_view. php?viewPaper=37 Bock, G.-W., Sahbherwal, R., & Qian, Z. (2008). The effect of social context on the success of knowledge repository systems. IEEE Transactions on Engineering Management, 55(4), 536–551. doi:10.1109/TEM.2008.927824 Bontis, N. (2001). Assessing knowledge assets: A review of the models used to measure intellectual capital. International Journal of Management Reviews, 3(1), 41–60. doi:10.1111/1468-2370.00053 Bontis, N. (1998). Intellectual Capital: An Exploratory Study that Develops Measures and Models. Management Decision, 36, 63–76. doi:10.1108/00251749810204142 Bose, R. (2004). Knowledge management metrics. Industrial Management & Data Systems, 104(6), 457–468. doi:10.1108/02635570410543771 Bots, P., & De Bruijn, H. (2002). Effective Knowledge Management in Professional Organizations. In Proceedings of the 35th Hawaii International Conference on System Sciences. IEEE Computer Society Press.
Boudreau, M., Gefen, D., & Straub, D. W. (2001). Validation of IS research: A state-of-the-art assessment. Management Information Systems Quarterly, 25(1), 1–24. doi:10.2307/3250956 Brady, M. K., & Cronin, J. (2001). Effects on Customer Service Perceptions and Outcome Behaviors. Journal of Service Research, 3(February), 241–251. Braganza, A., Hackney, R., & Tanudjojo, S. (2007). Organizational knowledge transfer through creation, mobilization and diffusion: a case analysis of InTouch within Schlumberger. Information Systems Journal. Bragge, J., & Merisalo-Rantanen, H. (2009). Engineering E-Collaboration Processes to Obtain Innovative End-User Feedback on Advanced Web-Based Information Systems. Journal of the Association for Information Systems, 10(3), 196–220. Bragge, J., Merisalo-Rantanen, H., Nurmi, A., & Tanner, L. (2007). A Repeatable E-Collaboration Process Based on ThinkLets for Multi-Organization Strategy Development. Group Decision and Negotiation, 16(2), 363–379. doi:10.1007/s10726-006-9055-5 Briggs, R. O., de Vreede, G. J., & Nunamaker, J. F. (2003). Collaboration Engineering with ThinkLets to Pursue Sustained Success with Group Support Systems. Journal of Management Information Systems, 19(4), 31–64. Briggs, R. O., & de Vreede, G. J. (2001). ThinkLets: Building Blocks for Concerted Collaboration, (Version 1.0), Tucson: GroupSystems.com. Bro, R. F. (1974). Interpretive Structural Modeling as Technology for Social Learning. Conference on Decision and Control, November 20-22. Phoenix, AZ: IEEE. Brockman, B. K., & Morgan, R. M. (2003). The role of existing knowledge in new product innovativeness and performance. Decision Sciences, 34(2), 385–420. doi:10.1111/1540-5915.02326
Brown, J. S., & Duguid, P. (2000). Balancing act: How to Capture Knowledge without Killing It. Harvard Business Review, 78(3). Brown, R. B., & Woodland, M. J. (1999). Managing knowledge wisely: A case study in organizational behavior. Journal of Applied Management Studies, 8(2), 175–198. Brown, S. A., Dennis, A. R., & Gant, D. B. (2006). Understanding the Factors Influencing the Value of Personto-Person Knowledge Sharing. 39th Hawaii International Conference on System Sciences, IEEE Computer Society. Bryman, A., & Bell, E. (2007). Business research methods. Oxford, UK: Oxford University Press. Bryne, B. M. (2001). Structural Equation Modeling with AMOS: Basic concepts, applications, and programming. Mahwah, NJ: Lawrence Erlbaum Associates. Bueno, E. (1998). El Capital Intangible como Clave Estratégica en la Competencia Actual. Boletín de Estudios Económicos, 53, 207–229. Cabrita, M. R., & Bontis, N. (2008). Intellectual Capital and Business Performance in the Portuguese Banking Industry. International Journal of Technology Management, 43(1-3), 212–237. doi:10.1504/IJTM.2008.019416 Capon, (1992). Profiles of Product Innovators among large U.S. Manufacturers. Management Science, 38, 157–169. Carlile, P., & Rebentisch, E. S. (2003). Into the Black Box: The Knowledge Transformation Cycle. Management Science, 49(9), 1180–1195. doi:10.1287/ mnsc.49.9.1180.16564 Carlsson, S. A., & Sawy, O. A. El, Eriksson, Inger and Raven, Arjan. (1996). Gaining Competitive Advantage Through Shared Knowledge Creation: In Search of a New Design Theory for Strategic Information Systems. In Proceedings of the Fourth European Conference on Information Systems. Lisbon, Portugal. (pp. 1067-1076).
Brooking, A. (1996). Intellectual Capital. Core Asset for the Third Millennium Enterprise. London: International Thomson Business Press.
Carlucci, D., & Schiuma, G. (2007). Exploring Intellectual Capital Concept in Strategic Management Research. In Joia, L. A. (Ed.), Strategies for Information Technology and Intellectual Capital (pp. 10–28). Hershey, PA: Information Science Reference. Carneiro, A. (2000). How does knowledge management influences innovation and competitiveness? Journal of Knowledge Management, 4(2), 87–98. Carnevale, D. G., & Wechsler, B. (1992). Trust in the public sector. Administration & Society, 23, 471–494. doi:10.1177/009539979202300404 Carson, E., Ranzijn, R., Winefield, A., & Marsden, H. (2004). Intellectual Capital. Mapping Employee and Work Group Attributes. Journal of Intellectual Capital, 5(3), 443–463. doi:10.1108/14691930410550390 Carvalho, R. B., & Ferreira, M. A. T. (2001). Using information technology to support knowledge conversion processes. Information Research, 7(1). Retrieved May 10th, 2009 from http://informationr.net/ir/7-1/paper118.html Chan, I., & Chau, P. Y. K. (2005). Getting knowledge management right: lessons from failure. International Journal of Knowledge Management, 1(3), 40–54. Chan, Y. E., Huff, S. L., & Copeland, D. G. (1998). Assessing realized information systems strategy. Strategic Information Systems, 6, 273–298. doi:10.1016/S09638687(97)00005-X Chen, F. F., West, S. G., & Sousa, K. H. (2006). A Comparison of Bifactor and Second-Order Models of Quality of Life. Multivariate Behavioral Research, 41(2), 189–225. doi:10.1207/s15327906mbr4102_5 Chen, F., Romano, N., & Nunamaker, J. F. (2006). A Collaborative Project Manamgent Approach and a Framework for Its Supporting Systems. Journal of International Technology and Information Management, 15(2), 1–16. Chen, A.-P., & Chen, M.-Y. (2005). A review of survey research in knowledge management performance measurement: 1995-2004. Paper presented at the I-KNOW 05.
Chilton, M. A., & Bloodgood, J. M. (2008). The dimensions of tacit & explicit knowledge: A description and measure. International Journal of Knowledge Management, 4(2), 75–91. Chilton, M. A., & Bloodgood, J. M. (2007). The dimensions of tacit & explicit knowledge: A description and measure. In Proceedings of the 40th Annual Hawaii International Conference on System Sciences (HICSS 2007), Hawaii, USA. Chin, W. W. (2005). PLS Graph. Soft Modeling, Inc. Chin, W. W., & Gopal, A. (1995). Adoption intention in GSS: Relative importance of beliefs. The Data Base for Advances in Information Systems, 26(2&3), 42–63. Chin, W. W. (1998). The partial least squares approach to structural equation modeling. In Marcoulides, G. A. (Ed.), Modern Methods for Business Research (pp. 295–336). Chin, W. W., & Newsted, P. R. (1999). Structural equation modeling analysis with small samples using Partial Least Squares. In Marcoulides, G. A. (Ed.), Modern Methods for Business Research (pp. 307–341). Choi, B., & Lee, H. (2002). An empirical investigation of KM styles and their effect on corporate performance. Information & Management, 40(5), 403–417. doi:10.1016/ S0378-7206(02)00060-5 Chong, S., & Choi, Y. S. (2005). Critical Factors in the Successful Implementation of Knowledge Management. Journal of Knowledge Management Practice – In the Knowledge Garden, 6. Choo, C. W. (1998). The Knowing Organization. New York: Oxford University Press. Choo, C. W., & Bontis, N. (2002). Strategic management of intellectual capital and organizational knowledge. New York: Oxford University Press. Chourides, P., Longbottom, D., & Murphy, W. (2003). Excellence in Knowledge Management: An Empirical Study to Identify Critical Factors and Performance Measures. Measuring Business Excellence, 7(2), 29–45. doi:10.1108/13683040310477977
Chua, A., & Lam, W. (2005). Why KM projects fail: a multi-case analysis. Journal of Knowledge Management, 9(3), 6–17.
Comrey, A. L., & Lee, H. B. (1992). A First Course in Factor Analysis (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Churchman, C. W. (1979). The Systems Approach (revised and updated). New York: Dell Publishing.
Conradi, R., & Dyba, T. (2001). An Empirical Study on the Utility of Formal Routines to Transfer Knowledge and Experience. ESEC/FSE. Vienna: ACM.
CIC (2003). Modelo Intellectus: Medición y Gestión del Capital Intelectual (Serie Documentos Intellectus No. 5). Madrid: Centro de Investigación sobre la Sociedad del Conocimiento (CIC). Clark, K. B., & Wheelwright, S. C. (1995). The product development challenge. Boston: Harvard Business Review. Clawson, V. K., Bostrom, R. P., & Anson, R. (1993). The Role of the Facilitator in Computer-Supported Meetings. Small Group Research, 24(4), 547–565. doi:10.1177/1046496493244007 Coff, R. W. (1997). Human Assets and Management Dilemmas: Coping with Hazards on the Road to Resource-Based Theory. Academy of Management Review, 22(2), 374–402. doi:10.2307/259327 Cohen, D. (1998). Toward a knowledge context: Report on the first annual U.C. Berkeley forum on knowledge and the firm. California Management Review, 40(3), 22–40. Cohen, D., & Prusak, L. (2001). In Good Company: How Social Capital Makes Organizations Work. Boston: Harvard Business School Press. Comer, J. M., Machleit, K. A., & Lagace, R. R. (1989). Psychometric assessment of a reduced version of INDSALES. Journal of Business Research, 18(4), 291–302. doi:10.1016/0148-2963(89)90023-4 Commission, 911 (2004). The 9/11 Commission Report: Final Report of the National Commission on Terrorist Attacks Upon the United States. New York: W.W. Norton & Company Ltd. Commission, 911 (2005). Final Report of the 9/11 Public Discourse Project, December 5, 2005. Retrieved January 12, 2006, from http://www.9-11pdp.org
Cook, J., & Wall, T. (1980). New work attitude measures of trust, organizational commitment and personal need non-fulfillment. Journal of Occupational Psychology, 53, 39–52. Cook, S. D. N., & Brown, J. S. (1999). Bridging Epistemologies: The Generative Dance Between Organizational Knowledge and Organizational Knowing. Organization Science, 10(4), 381–400. doi:10.1287/orsc.10.4.381 Cooper, R. G. (1984). The impact of new product strategies: what distinguishes top performers? Journal of Product Innovation Management, 1(2), 151–164. Cooper, L. P. (2006). An Evolutionary Model for KM Success. In Proceedings of the 39th Hawaii International Conference on System Sciences. IEEE Computer Society Press. Corso, M., Martini, A., Paolucci, E., & Pellegrini, L. (2003). Knowledge management configurations in Italian small-to-medium enterprises. Integrated Manufacturing Systems, 14(1), 46–56. doi:10.1108/09576060310453344 Crnkovic, J., Belardo, S., & Asoh, D. (2004). The Knowledge Management Index as a micro level organizational diagnostic tool: Analysis and illustrations with data from a pilot study. Paper presented at the 9th World Multiconference on Systemics, Cybernetics, and Informatics (SCI 2004), Orlando, Florida. Cross, R., & Baird, L. (2000). Technology Is Not Enough: Improving Performance by Building Organizational Memory. Sloan Management Review, 41(3), 41–54. Culbert, S. A., & McDonough, J. J. (1986). The politics of trust and organizational empowerment. Public Administration Quarterly, 10, 171–188.
Dalkir, K. (2005). Knowledge management in theory and practice. Boston: Elsevier. Damodaran, L., & Olphert, W. (2000). Barriers and Facilitators to the Use of Knowledge Management Systems. Behaviour & Information Technology, 19(6), 405–413. doi:10.1080/014492900750052660 Daniel, D. R. (1961). Management Information Crisis. Harvard Business Review, 39(5), 111–112. Darroch, J. (2003). Developing a measure of knowledge management behaviors and practices. Journal of Knowledge Management, 7(5), 41–54. doi:10.1108/13673270310505377 Davenport, T. H., DeLong, D. W., & Beers, M. C. (1998). Successful Knowledge Management Projects. Sloan Management Review, 39(2), 43–57. Davenport, T. H., & Prusak, L. (1998). Working Knowledge. Boston, MA: Harvard Business School Press. Davenport, T. H., Thomas, R. J., & Cantrell, S. (2002). The mysterious art and science of knowledge-worker performance. Sloan Management Review, 44, 23–30. Davenport, T. H., & Grover, V. (2001). Special issue: Knowledge management. Journal of Management Information Systems, 18(1), 3–4. Davis, F. D. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. Management Information Systems Quarterly, 13, 319–340. doi:10.2307/249008 Day, G. S. (1999). Market Driven Strategy. New York: The Free Press. Day, G. S., & Wensley, R. (1988, April). Assessing advantage: a framework for diagnosing competitive superiority. Journal of Marketing, 52, 1–20. De Furia, G. L. (1997). Facilitator’s guide to the interpersonal trust surveys. Pfeiffer & Co. De Furia, G. L. (1996). A Behavioral Model of Interpersonal Trust. Unpublished Doctoral dissertation, St. John’s University, Springfield, LA.
De Long, D. W., & Fahey, L. (2000). Diagnosing cultural barriers to knowledge management. The Academy of Management Executive, 14(4), 113–127. De Tienne, K. B., Dyer, G., Hoopes, C., & Harris, S. (2004). Toward a Model of Effective Knowledge Management and Directions for Future Research: Culture, Leadership and CKO's. Journal of Leadership & Organizational Studies, 10(4), 26–43. de Vreede, G.-J., Briggs, R. O., & Massey, A. P. (2009). Collaboration Engineering: Foundations and Opportunities: Editorial to the Special Issue on the Journal of the Association of Information Systems. Journal of the Association for Information Systems, 10(3), 121–137. de Vreede, G.-J., Fruehling, A., & Chakrapani, A. (2005). A Repeatable Collaboration Process for Usability Testing. In Proceedings of the 38th Hawaii International Conference on System Sciences. Retrieved April 17, 2009 from http://www2.computer.org/plugins/dl/pdf/proceedings/hicss/2005/2268/01/22680046.pdf Delahaye, D. (2003). Knowledge Management in a SME. International Journal of Organisational Behaviour, 9(3), 604–614. Delmonte, A. J., & Aronson, J. E. (2004). The Relationship Between Social Interaction And Knowledge Management System Success. Journal of Knowledge Management Practice, 5. DeLone, W. H., & McLean, E. R. (1992). Information Systems Success: The Quest for the Dependent Variable. Information Systems Research, 3, 60–95. doi:10.1287/isre.3.1.60 DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean Model of Information Systems Success: A Ten-Year Update. Journal of Management Information Systems, 19(4), 9–30.
den Hengst, M., & Adkins, M. (2007). Which collaboration patterns are most challenging: A global survey of facilitators. In Proceedings of the 40th Annual Hawaii International Conference on System Sciences. Retrieved May 4, 2009 from csdl.computer.org/comp/proceedings/hicss/2007/2755/00/27550017b.pdf den Hengst, M., Dean, D., Kolfschoten, G., & Chakrapani, A. (2006). Assessing the Quality of Collaborative Processes. In Proceedings of the 39th Hawaii International Conference on System Sciences. Retrieved April 17, 2009 from http://www2.computer.org/plugins/dl/pdf/proceedings/hicss/2006/2507/01/250710016b.pdf den Hengst, M., van de Kar, E., & Appelman, J. (2004). Designing mobile information services: user requirements elicitation with GSS design and application of a repeatable process. In Proceedings of the 37th Annual Hawaii International Conference on System Sciences. Retrieved April 17, 2009 from http://www2.computer.org/plugins/dl/pdf/proceedings/hicss/2004/2056/01/205610018c.pdf Denning, S. (1998). What is knowledge management? Retrieved from http://www.stevedenning.com/Find_what_is_km.html
DeVellis, R. F. (1991). Scale development: Theory and applications. Newbury Park, CA: Sage. Dhaliwal, J., & Benbasat, I. (1996). The use and effects of knowledge-based system explanations: Theoretical foundations and a framework for empirical evaluation. Information Systems Research, 7(3), 342–362. doi:10.1287/isre.7.3.342 Diamantopoulos, A., & Winklhofer, H. M. (2001, May). Index construction with formative indicators: An alternative to scale development. JMR, Journal of Marketing Research, 38(2), 269–277. doi:10.1509/jmkr.38.2.269.18845 Dillman, D. A. (2000). Mail and Internet Surveys: The tailored design method (2nd ed.). New York: John Wiley & Sons, Inc. Dinur, A. (2002). Intrafirm knowledge transfers in multinational corporations: Considering critical context. Temple University. Doll, W. J., & Torkzadeh, G. (1988). The Measurement of End-User Computing Satisfaction. Management Information Systems Quarterly, 12, 259–275. doi:10.2307/248851
Dennis, A. R., & Vessey, I. (2005). Three knowledge management strategies: knowledge hierarchies, knowledge markets, and knowledge communities. MIS Quaterly Executive, 4(4), 399–412.
Dooley, K. J., Corman, S. R., & McPhee, R. D. (2002). A Knowledge Directory for Identifying Experts and Areas of Expertise. Human Systems Management, 21, 217–228.
Dennis, A., & Wixom, B. (2001). Investigating the moderators of the group support system use with meta-analysis. Journal of Management Information Systems, 18(3), 235–257.
Dreyfus, H., & Dreyfus, S. (1997). Why Computers May Never Think Like People. In Ruggles, R. (Ed.), Knowledge Management Tools (pp. 31–50). Boston: Butterworth-Heinemann.
Dennis, A. R., Tyran, C. K., Vogel, D., & Nunamaker, J. F. (1997). Group Support Systems for Strategic Planning. Journal of Management Information Systems, 14(1), 155–184.
Dudezert, A. (2006). Approaches and Methods for Valuing Knowledge Management Performance. In Boughzala, I., & Ermine, J.-L. (Eds.), Trends In Enterprise Knowledge Management. International Scientific and Technical Encyclopedia. doi:10.1002/9780470612132.ch6
DeSanctis, G., & Gallupe, R. B. (1987). A Foundation for the Study of Group Decision Support Systems. Management Science, 33(5), 589–609. doi:10.1287/mnsc.33.5.589 Desouza, K. C., & Raider, J. J. (2006). Cutting corners: CKOs and knowledge management. Business Process Management Journal, 12(2).
Dupuy, F. (2000). Why Is It So Difficult to Reform Public Administration? Government of the Future. Paris: OECD Publications.
Dutta, S. (1997). Strategies for Implementing Knowledge-based Systems. IEEE Transactions on Engineering Management, 44(1), 79–90. doi:10.1109/17.552810 Earl, M. (2001). Knowledge management strategies: Toward a taxonomy. Journal of Management Information Systems, 18(1), 215–234. Eden, C., & Ackermann, F. (2001). Group Decision and Negotiation in Strategy Making. Group Decision and Negotiation, 10(2), 119–140. doi:10.1023/A:1008710816126
Engestrom, Y., Engestrom, R., & Karkkainen, M. (1995). Polycontextuality and boundary crossing in expert cognition: Learning and problem solving in complex work activities. Learning and Instruction, 5(4), 319–336. doi:10.1016/0959-4752(95)00021-6 Ericsson, F., & Avdic, A. (2003). Knowledge Management Systems Acceptance. In Coakes, E. (Ed.), Knowledge Management: Current Issues and Challenges (pp. 39–51). Hershey, PA: IRM Press.
Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–384. doi:10.2307/2666999
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the Use of Exploratory Factor Analysis in Psychological Research. Psychological Methods, 4(3), 272–299. doi:10.1037/1082-989X.4.3.272
Edvinsson, L., & Malone, M. (1997). Intellectual Capital: Realizing your Company's True Value by Finding Its Hidden Brainpower. New York: Harper Collins Publishers, Inc.
Fahey, L., & Prusak, L. (1998). The Eleven Deadliest Sins of Knowledge Management. California Management Review, 40(3), 265–276.
Edwards, J. R., & Bagozzi, R. P. (2000). On the nature and direction of the relationship between constructs and measures. Psychological Methods, 5, 155–174. doi:10.1037/1082-989X.5.2.155 Edwards, J. S., & Kidd, J. B. (2003, February). Knowledge Management Sans Frontiers. The Journal of Operational Research Society. Special Issue on Knowledge Management and Intellectual Capital, 54(2), 130–139. Ehms, K., & Langen, M. (2002). Holistic Development of Knowledge Management with KMMM. Retrieved 5 December, 2004, from www.knowledgeboard.com/cgibin/library.cgi?action=detail&id=5180 Eisenhardt, K. M., & Tabrizi, B. N. (1995). Accelerating adaptive processes: Product innovation in the global computer industry. Administrative Science Quarterly, 40(1), 84–110. doi:10.2307/2393701 Elliott, S., & O’Dell, C. (1999). Sharing knowledge & best practices: The hows and whys of tapping your organization’s hidden reservoirs of knowledge. Health Forum Journal, 42(3), 34–37.
Falk, R. F., & Miller, N. B. (1992). A Primer for Soft Modeling. Akron, OH: The University of Akron. Fernandes, A. A. (2000). Combining inductive and deductive inference in knowledge management tasks. In Eleventh International Workshop on Database and Expert Systems Applications (pp. 1109–1114). IEEE Computer Society Press. Fink, K., & Ploder, C. (2008). Integration Concept for Knowledge Processes, Methods & Software for SMEs. In Gupta, J., Sharma, S., & Rashid, M. (Eds.), Encyclopedia of Enterprise Systems. Hershey, PA: IGI Global. Fink, K., & Ploder, C. (2007). Knowledge Process Modeling in SME and Cost-Efficient Software Support: Theoretical Framework and Empirical Studies. In Khosrow-Pour, M. (Ed.), Managing Worldwide Operations and Communications with Information Technology. Hershey, PA: IGI Global. Fink, K. (2009). Knowledge Measurement Barriers. Paper presented at the 15th Americas Conference on Information Systems, San Francisco.
Fink, K., & Ploder, C. (2006). The Impact of Knowledge Process Modeling on Small and Medium-sized Enterprises. In K. Tochtermann & H. Maurer (Eds.), Proceedings of I-KNOW ‘06: 6th International Conference on Knowledge Management (pp. 47-51). Graz: J.UCS. Fjermestad, J., & Hiltz, S. R. (1999). An Assessment of Group Support Systems Experimental Research: Methodology and Results. Journal of Management Information Systems, 15(3), 7–150. Fjermestad, J., & Hiltz, S. R. (2000). Group Support Systems: A Descriptive Evaluation of Case and Field Studies. Journal of Management Information Systems, 17(3), 112–157.
GAO. (2005). High-Risk Series, An Update, January, 2005, Document # GAO-05-207. Retrieved February 2, 2005, from http://www.gao.gov/sp Garud, R. (1997). On the distinction between know-how, know-what and know-why. In Huff, A., & Walsh, J. (Eds.), Advances in Strategic Management (pp. 81–101). Greenwich, CT: JAI Press. Gasson, S. (2005). The dynamics of sensemaking, knowledge, and expertise in collaborative, boundary-spanning design. Journal of Computer-Mediated Communication, 10(4).
Focus, H. R. (2007). Why culture can mean life or death for your organization. HRFocus, 84, 9.
Gebhardt, G. F., Carpenter, G. S., & Sherry, J. F. Jr. (2006). Creating a market orientation: a longitudinal, multifirm, grounded analysis of cultural transformation. Journal of Marketing, 70, 37–55.
Fornell, C., & Bookstein, F. (1982). Two structural equation models: LISREL and PLS applied to consumer exit-voice theory. JMR, Journal of Marketing Research, 19, 440–452. doi:10.2307/3151718
Gefen, D., Straub, D. W., & Boudreau, M.-C. (2000). Structural equation modeling and regression: Guidelines for research practice. Communications of the Association for Information Systems, 4(Article 7), 1-77.
Fornell, C., & Larcker, D. (1981). Evaluating structural equation models with unobservable variables and measurement errors. JMR, Journal of Marketing Research, 18, 39–50. doi:10.2307/3151312
George, D., & Mallery, P. (2005). SPSS for Windows Step by Step: A Simple Guide and Reference – 12.0 Update, 5th Edition.
Fornell, C., Lorange, P., & Roos, J. (1990, Oct.). The cooperative venture formation process: A latent variable structural modeling approach. Management Science, 36, 1246–1255. doi:10.1287/mnsc.36.10.1246
Gerbing, D. W., & Anderson, J. C. (1988). An updated paradigm for scale development incorporating unidimensionality and its assessment. JMR, Journal of Marketing Research, 25, 186–192. doi:10.2307/3172650
Fornell, C. R. (1982). A Second Generation of Multivariate Analysis: Vol. 1. Methods. New York: Praeger.
Gerjuoy, E. (1993). Uncertainty Principle. In Parker, S. (Ed.), Encyclopedia of Physics (pp. 1490–1491). New York: McGraw-Hill.
Freeze, R., & Kulkarni, U. (2007). Knowledge Management Capability: Defining Knowledge Assets. Journal of Knowledge Management, 11(6). doi:10.1108/13673270710832190
Germain, R., Droge, C., & Daugherty, P. J. (1994). The Effect of Just-in-time selling on organizational structure: an empirical investigation. JMR, Journal of Marketing Research, 31, 471–483.
Galford, R., & Drapeau, A. S. (2003). The enemies of trust. Harvard Business Review.
Geus, A. (1997). The Living Company. Boston: Harvard Business School Press.
GAO. (2004). GAO Strategic Plan for 2004-2009, Document # GAO-04-5334SP. Retrieved December 20, 2005, from http://www.gao.gov/sp.html
Ginsberg, M., & Kambil, A. (1999). Annotate: A Web-based Knowledge Management Support System for Document Collections. In Proceedings of the 32nd Hawaii International Conference on System Sciences, IEEE Computer Society Press. Glazer, R. (1991). Marketing in an Information-Intensive Environment: Strategic Implications of Knowledge as an Asset. Journal of Marketing, 55, 1–19. Goh, S. C. (2002). Managing Effective Knowledge Transfer: An Integrative Framework and Some Practice Implications. Journal of Knowledge Management, 6(1), 23–30. doi:10.1108/13673270210417664 Gold, A. H., Malhotra, A., & Segars, A. H. (2001). Knowledge Management: An Organizational Capabilities Perspective. Journal of Management Information Systems, 18(1), 185–214. Goodhue, D. L., & Thompson, R. L. (1995). Task-Technology Fit and Individual Performance. Management Information Systems Quarterly, 19(2), 213–236. doi:10.2307/249689 Gopal, A., Bostrom, R. P., & Chin, W. W. (1992). Applying adaptive structuration theory to investigate the process of group support systems use. Journal of Management Information Systems, 9(3), 45–69. Grant, R. M. (1996). Prospering in dynamically-competitive environments: organizational capability as knowledge integration. Organization Science, 7(4), 375–387. Grant, R. M. (1996). Toward a knowledge-based theory of the firm. Strategic Management Journal, 17(Special Issue), 109–122. Grant, R. A., & Higgins, C. A. (1991). The impact of computerized performance monitoring on service work: Testing a causal model. Information Systems Research, 2(2), 116–142. doi:10.1287/isre.2.2.116 Grant, R. M. (1991). The resource-based theory of competitive advantage: Implications for strategy. California Management Review, 22, 114–135.
Green, H. (2000). Information Theory and Quantum Physics. Berlin: Springer Publisher. Greenhalgh, T., Robert, G., MacFarlane, F., Bate, P., & Kyriakidou, O. (2004). Diffusion of innovations in service organizations: Systematic review and recommendations. The Milbank Quarterly, 82(4), 581–629. doi:10.1111/j.0887-378X.2004.00325.x Gribbin, J. (1999). Q is for Quantum. New York: A Touchstone Book. Giffin, K. (1967). The contribution of studies of source credibility to a theory of interpersonal trust in the communication process. Psychological Bulletin, 68, 104–120. doi:10.1037/h0024833 Grover, V., & Davenport, T. H. (2001). General perspectives on knowledge management: fostering a research agenda. Journal of Management Information Systems, 18(1), 5–17. Gruber, M. (2000). Der Wandel von Erfolgsfaktoren Mittelständischer Unternehmen. Wiesbaden: DUV. Guo, Z., & Sheffield, J. (2006). A paradigmatic and methodological examination of KM research: 2000 to 2004. In Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS 2006), Hawaii, USA. Gupta, M. (1992). Intelligence, Uncertainty and Information. In Ayyub, B., Gupta, M., & Kanal, L. (Eds.), Analysis and Management of Uncertainty. Amsterdam: North-Holland. Gurteen, D. (1998). Knowledge, Creativity and Innovation. Journal of Knowledge Management, 2(1), 5–13. doi:10.1108/13673279810800744 Gustafsson, J., & Balke, G. (1993). General and Specific Abilities as Predictors of School Achievement. Multivariate Behavioral Research, 28, 407–434. doi:10.1207/s15327906mbr2804_2 Hackett, B. (2000). Beyond Knowledge Management: New Ways to Work and Learn. New York: The Conference Board.
Hackman, J. R., & Oldham, G. R. (1980). Work redesign. Reading, MA: Addison-Wesley. Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate Data Analysis. Upper Saddle River, NJ: Prentice Hall. Hair, J. F. Jr, Anderson, R. E., Tatham, R. L., & Black, W. C. (2004). Análisis Multivariante. Madrid: Pearson-Prentice Hall. Hall, R. (1992). The Strategic Analysis of Intangible Resources. Strategic Management Journal, 13, 135–144. doi:10.1002/smj.4250130205 Hall, R. (1993). A Framework Linking Intangible Resources and Capabilities to Sustainable Competitive Advantage. Strategic Management Journal, 14, 607–618. doi:10.1002/smj.4250140804 Hamel, G., & Prahalad, C. K. (1995). Competindo pelo futuro. Rio de Janeiro, Brazil: Campus. Hansen, M. T., Nohria, N., & Tierney, T. (1999). What's Your Strategy For Managing Knowledge? Harvard Business Review, (March-April): 106–116. Hansen, M. T., & Nohria, N. (2004). How to Build Collaborative Advantage? MIT Sloan Management Review, 46(1), 22–30. Hansen, M., Nohria, N., & Tierney, T. (2001). What's your strategy for managing knowledge? In Harvard Business Review on Organizational Learning (pp. 61–86). Boston: Harvard Business School Press. Harigopal, U., & Satyadas, A. (2001). Cognizant Enterprise Maturity Model (CEMM). IEEE Transactions on Systems, Man and Cybernetics. Part C, Applications and Reviews, 31(4), 449–459. doi:10.1109/5326.983928
Hatami, A., Galliers, R. D., & Huang, J. (2003). Exploring the Impacts of Knowledge (Re) Use and Organizational Memory on the Effectiveness of Strategic Decisions: A Longitudinal Case Study. 36th Hawaii International Conference on System Sciences, IEEE Computer Society. Hatcher, L. (1994). A step by step approach to using SAS for factor analysis and structural equation modeling. Cary, N.C: SAS. Henderson, J. C., & Venkatraman, H. (1999). Strategic alignment: Leveraging information technology for transforming organisations. IBM Systems Journal, 38(2/3), 472. Henderson, J., & Venkatraman, N. (1993). Strategic Alignment: Leveraging Information Technology for Transforming Organizations. IBM Systems Journal, 32(2), 4–16. Hendriks, P., & Vriens, D. (1999). Knowledge-based systems and knowledge management: friends or foes? Information & Management, 35(2), 113–125. doi:10.1016/ S0378-7206(98)00080-9 Henselewski, M., Smolnik, S., & Riempp, G. (2006). Evaluation of Knowledge Management Technologies for the Support of Technology Forecasting. In Proceedings of the 39th Hawaii International Conference on System Sciences. Hawaii, USA: IEEE Computer Society Press. Herschel, R., & Yermish, I. (2008). Knowledge Transfer: Revisiting Video. International Journal of Knowledge Management, 4(2). Hibbard, J. (1997, Oct.). Knowing what we know. InformationWeek, 653, 46–54.
Hariharan, A. (2005). Implementing seven KM enablers at Bharti. Knowledge Management Review, 8(3), 8–9.
Hinds, P., & Pfeffer, J. (2003). Why organizations don't know what they know: cognitive and motivational factors affecting the transfer of expertise. In Ackerman, M., Pipek, V., & Wulf, V. (Eds.), Sharing expertise: beyond knowledge management (pp. 3–26). Cambridge, MA: MIT Press.
Hart, W. L., & Malone, D. (1974). Goal Setting for a State Environmental Agency. Conference on Decision and Control, November 20-22. Phoenix, AZ: IEEE.
Hoffer, J. A., Prescott, M. B., & McFadden, R. R. (2005). Modern Database Management (7th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
Holsapple, C. W., & Joshi, K. D. (2000). An Investigation of Factors that Influence the Management of Knowledge in Organizations. The Journal of Strategic Information Systems, 9, 235–261. doi:10.1016/S0963-8687(00)00046-9 Holsapple, C. W., & Whinston, A. B. (1996). Decision Support Systems – A Knowledge-Based Approach. Minneapolis, St. Paul: West Publishing Company.
Iivari, J. (2005). An empirical test of the DeLone-McLean Model of information system success. The Data Base for Advances in Information Systems, 36(2), 8–26. Im, S., & Workman, J. P. (2004). Market Orientation, Creativity, and New Product Performance in High-Technology Firms. Journal of Marketing, 68, 114–132.
Holsapple, C., & Wu, J. (2008). Does Knowledge Management Pay Off? Paper presented at the HICSS-41.
Im, S. K., Grover, V., & Sharma, S. (1998). The use of structural equation modeling in research. (Report). Columbia: University of South Carolina.
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55. doi:10.1080/10705519909540118
International Standards Organization. (2006). ISO/IEC 16085:2006: Systems and software engineering - Life cycle processes - Risk management. Geneva: International Standards Organization.
Huber, G. P. (1984). Issues in the Design of Group Decision Support Systems. Management Information Systems Quarterly, 8(3), 195–204. doi:10.2307/248666
Ionescu, L. M., Burstein, F., & Zyngier, S. (2006). Knowledge Management Strategies in Australia (No. 2006/01). Caulfield East: Knowledge Management Research Program, Monash University.
Huber, G. P., Davenport, T. H., & King, D. (1998). Some Perspectives on Organizational Memory, Unpublished Working Paper for the Task Force on Organizational Memory, In F. Burstein, G. Huber, M. Mandviwalla, J. Morrison, and L. Olfman, (eds.), Presented at the 31st Annual Hawaii International Conference on System Sciences. Hubert, C. (2002). Knowledge Management: It’s About Engaging Your Culture, Not Changing It Retrieved July, 17, 2004, from www.apqc.org/portal/apqc/site/ content?docid=107492 Hulland, J. (1999). Use of partial least squares (PLS) in strategic management research: A review of four recent studies. Strategic Management Journal, 20, 195–204. doi:10.1002/(SICI)1097-0266(199902)20:2<195::AIDSMJ13>3.0.CO;2-7 Hult, T. G., & Ketchen, D. J. (2001). Does Market Orientation Matter? A Test of the Relationship Between Positional Advantage and Performance. Strategic Management Journal, 22(September), 899–906. Hurley, R. F., & Hult, G. T. M. (1998). Innovation, Marketing Orientation and Organizational Learning: An Integration and Empirical Examination. Journal of Marketing, 62, 42–54.
IT Governance Institute. (2001). Board Briefing on IT Governance. Rolling Meadows, Il. Information Systems Audit and Control Foundation. Itami, H., & Roehl, T. (1987). Mobilizing Invisible Assets. Cambridge, MA: Harvard University Press. Janz, B. D., & Prasarnphanich, P. (2003). Understanding the antecedents of effective knowledge management: The importance of a knowledge-centered culture. Decision Sciences, 34(2), 351–384. doi:10.1111/1540-5915.02328 Jarvis, C. B., Mackenzie, S. B., & Podsakoff, P. M. (2003, September). A critical review of construct indicators and measurement model misspecification in marketing and consumer research. The Journal of Consumer Research, 30, 199–218. doi:10.1086/376806 Jaworski, B. J., & Kohli, A. K. (1993). Market Orientation: Antecedents and Consequences. Journal of Marketing, 57, 53–70. Jennex, M. E. (2005). What is Knowledge Management? International Journal of Knowledge Management, 1(4), 1–5.
Jennex, M. E., & Olfman, L. (2005). Assessing Knowledge Management Success. International Journal of Knowledge Management, 1(2), 33–49. Jennex, M. E., & Olfman, L. (2006). A Model of Knowledge Management Success. International Journal of Knowledge Management, 2(3), 51–68. Jennex, M. E. (2006). Culture, Context, and Knowledge Management. International Journal of Knowledge Management, 2(2), 1–5. Jennex, M. E., & Croasdell, D. (2005). Knowledge Management: Are We A Discipline? International Journal of Knowledge Management, 1(1), 1–5. Jennex, M. E. (2005). Knowledge Management Systems. International Journal of Knowledge Management, 1(2), 1–4. Jennex, M. E. (Ed.). (2007). Knowledge Management in Modern Organizations. Hershey, PA: Idea Group Publishing. Jennex, M. E. (2000). Using an Intranet to Manage Knowledge for a Virtual Project Team, Internet-Based Organizational Memory and Knowledge Management (Schwartz, D. G., Divitini, M., & Brasethvik, T., Eds.). Idea Group Publishing. Jennex, M. (2005). The issue of system use in knowledge management systems. In Proceedings of the 38th Hawaii International Conference on System Sciences (HICSS). Jennex, M. E. (2007, November 9). Knowledge Management in Support of Education. First International Conference on Education Reform, Khon Kaen, Thailand. Jennex, M. E., & Olfman, L. (2000). Development Recommendations for Knowledge Management/Organizational Memory Systems. In Proceedings of the Information Systems Development Conference.
Jennex, M. E., & Olfman, L. (2002). Organizational Memory/Knowledge Effects on Productivity, A Longitudinal Study. In Proceedings of the 35th Annual Hawaii International Conference on System Sciences, IEEE Computer Society. Jennex, M. E., & Olfman, L. (2003). A Knowledge Management Success Model: An Extension of DeLone and McLean's IS Success Model. Paper presented at the Ninth Americas Conference on Information Systems. Jennex, M. E., & Olfman, L. (2004). Assessing Knowledge Management Success/Effectiveness Models. In Proceedings of the 37th Hawaii International Conference on System Sciences. HICSS37. IEEE Computer Society. Jennex, M. E., Olfman, L., Pituma, P., & Yong-Tae, P. (1998). An Organizational Memory Information Systems Success Model: An Extension of DeLone and McLean's I/S Success Model. In Proceedings of the 31st Annual Hawaii International Conference on System Sciences, IEEE Computer Society. Jennex, M. E., Croasdell, D., & Smolnik, S. (2008). Towards Measuring Knowledge Management Success. In Proceedings of the 41st Hawaii International Conference on System Sciences. IEEE Computer Society. Jennex, M. E., Olfman, L., & Addo, T. B. A. (2003). The Need for an Organizational Knowledge Management Strategy, 36th Hawaii International Conference on System Sciences, HICSS36, IEEE Computer Society. Jennex, M. E., Smolnik, S., & Croasdell, D. T. (2007). Towards Defining Knowledge Management Success. In Proceedings of the 40th Annual Hawaii International Conference on System Sciences (HICSS 2007), Hawaii, USA. Retrieved April 6, 2009 from http://www2.computer.org/plugins/dl/pdf/proceedings/hicss/2007/2755/00/27550193c.pdf
Jennex, M., & Zyngier, S. (2008). Security as a Contributor to Knowledge Management Success. Information Systems Frontiers: A Journal of Research and Innovation, 9(5), 493–504. Johnson, B., Lorenz, E., & Lundvall, B. A. (2002). Why all this fuss about codified and tacit knowledge? Industrial and Corporate Change, 11(2), 245–262. doi:10.1093/ icc/11.2.245 Jöreskog, K., & Sörbom, D. (1998). Lisrel 8: User’s reference guide. Chicago: SSI. Kaen, F. R. (1995). Corporate finance: concepts and policies. Cambridge, MA: Blackwell Business. Kankanhalli, A., & Tan, B. C. Y. (2005). Knowledge management metrics: A review and directions for future research. International Journal of Knowledge Management, 1(2), 20–32. Kankanhalli, A., Tan, B., & Kwok-Kee, W. (2005). Contributing Knowledge to Electronic Knowledge Repositories – An Empirical Investigation. Management Information Systems Quarterly, 29(1), 113–143. Kanungo, S., & Bhatnagar, V. (2001). Beyond generic models for information system quality: The use of interpretive structural modeling. Systems Research and Behavioral Sciences. Kaplan, R. S., & Norton, D. P. (2001). Transforming the Balanced Scorecard from Performance Measurement to Strategic Management: Part 1. Accounting Horizons, 15(1), 87–104. Kaplan, R., & Norton, D. (1992). The Balanced Scorecard – Measures that drive Performance. Harvard Business Review, 70, 71–79. Kavanagh, M., & Thite, M. (2009). Human Resource Information Systems. Los Angeles: Sage P. Kelleher, D., & Courtney, N. (2003). PD 7502 Guide to Measurements in Knowledge Management. London: British Standards Institution.
Keskin, H. (2005). The relationships between explicit and tacit oriented KM strategy, and firm performance. Journal of American Academy of Business, 7(1), 169–175. Khalifa, M., Yu, A. Y., & Shen, K. N. (2008). Knowledge management systems success: a contingency perspective. Journal of Knowledge Management, 12(1), 119–132. doi:10.1108/13673270810852430 Kilmann, R. (2001). Quantum Organization. Palo Alto, CA: Davies-Black Publishing. Kim, D. H. (1993). The link between individual and organizational learning. Sloan Management Review, 35(1), 37–51. Kim, S., & Hagtvet, K. A. (2003). The impact of misspecified item parceling on representing latent variables in covariance structure modeling: A simulation study. Structural Equation Modeling, 10(1), 101–127. doi:10.1207/ S15328007SEM1001_5 King, W. R., Marks, P. V., & McCoy, S. (2002). The Most Important Issues in Knowledge Management. Communications of the ACM, 45(9), 93–97. doi:10.1145/567498.567505 King, W. R. (2006). Maybe a “knowledge culture” isn’t always so important after all! Information Systems Management, 23(1), 88–89. doi:10.1201/1078.10580530/457 69.23.1.20061201/91776.10 King, W. R. (2007). A Research Agenda for the Relationships Between Culture and Knowledge Management. Knowledge and Process Management, 14(3), 226–236. doi:10.1002/kpm.281 Kirca, A. H., Jayachandran, S., & Bearden, W. (2005). Market orientation: a meta-analytic review and assessment of its antecedents and impact on performance. Journal of Marketing, 69, 24–41. Kivijärvi, H. (2008). Aligning Knowledge and Business Strategies within an Artificial Ba. In Abou-Zeid, E.-S. (Ed.), Knowledge Management and Business Strategies: Theoretical Frameworks and Empirical Research. Hersey, PA: Information Science Reference.
Kline, T. J. B., & McGrath, J.-L. (1998). Development and validation of five criteria for evaluating team performance. Organization Development Journal, 16(3), 19–27. Kline, T. J. B., & McGrath, J.-L. (1999). A Review of the Groupware Literature: Theories, Methodologies, and a Research Agenda. Canadian Psychology, 40(3), 265–271. doi:10.1037/h0086842 Know-Net. (2000). The approach, from http://www. know-net.org Kogut, B., & Zander, U. (1992). Knowledge of the firm, combinative capabilities, and the replication of technology. Organization Science, 3(3), 383–398. doi:10.1287/ orsc.3.3.383 Kogut, B. (2000). The Network as Knowledge: Generative Rules and Emergence of Structure. Strategic Management Journal, 21, 405–425. doi:10.1002/(SICI)10970266(200003)21:3<405::AID-SMJ103>3.0.CO;2-5 Kogut, B., & Zander, U. (1996). What Firms Do? Coordination, Identity, and Learning. Organization Science, 7(5), 502–518. doi:10.1287/orsc.7.5.502 Koh, E. C., Ryan, S., & Prybutok, V. R. (2005). Creating value through managing knowledge in an e-Government to constituency (G2C) environment. Journal of Computer Information Systems, 45(4), 32–42. Kohli, A. K., & Jaworski, B. J. (1990). Marketing Orientation: The Construct, Research Proposition and Managerial Implications. Journal of Marketing, 30, 1–18. Kolfschoten, G. L., Briggs, R. O., de Vreede, G. J., Jacobs, P. H. M., & Appelman, J. (2006). A Conceptual Foundation of the ThinkLet Concept for Collaboration Engineering. International Journal of Human-Computer Studies, 64(7), 611–621. doi:10.1016/j.ijhcs.2006.02.002 Kolfschoten, G. L. (2007). Theoretical Foundations for Collaboration Engineering, Dissertation, Delft University of Technology, 269.
Kong, E. (2008). The Development of Strategic Management in the Non-Profit Context: Intellectual Capital in Social Service Non-Profit Organizations. International Journal of Management Reviews, 10(3), 281–299. doi:10.1111/j.1468-2370.2007.00224.x Koskinen, K. U. (2003). Evaluation of Tacit Knowledge Utilization in Work Units. Journal of Knowledge Management, 7(5), 67–81. doi:10.1108/13673270310505395 Koskinen, K. U. (2001). Tacit Knowledge as a Promoter of Success in Technology Firms. 34th Hawaii International Conference on System Sciences, IEEE Computer Society. KPMG. (2002/2003). Insights from KPMG’s European Knowledge management Survey 2002/2003. KPMG Knowledge Advisory Services, The Netherlanders. KPMG Consulting (2000). Knowledge Management Research Report. Kramer, R. M. (2007). Organizational Trust: A Reader. USA: Oxford University Press. Kröll, M., Rath, A., Granitzer, M., Lindstaedt, S., & Tochtermann, K. (2006). Contextual Retrieval in Knowledge Intensive Business Environments. GI-Workshop Information Retrieval. Kröll, M., Rath, A., Weber, N., Lindstaedt, S., & Granitzer, M. (2007). Task Instance Classification via Graph Kernels, Mining and Learning with Graphs. Italy: ICDM Workshop. Kulkarni, U., Ravindran, S., & Freeze, R. (2006). A Knowledge Management Success Model: Theoretical Development and Empirical Validation. Journal of Management Information Systems, 23(3), 309–347. doi:10.2753/MIS0742-1222230311 Kulkarni, U., & Freeze, R. D. (2004). Development and Validation of a Knowledge Management Capability Assessment Model. In Proceedings on the Twenty-Fifth International Conference on Information Systems, Washington DC
Lai, H., & Chu, T. (2002). Knowledge Management: A Review of Industrial Cases. Journal of Computer Information Systems, 42(5), 26–39. Lai, J.-Y., Wang, C.-T., & Chou, C.-Y. (2008). How Knowledge Map and Personalization Affect Effectiveness of KMS in High-Tech Firms. In Proceedings of the 41st Hawaii International Conference on System Sciences. Retrieved April 6, 2009 from http://www2.computer.org/plugins/dl/pdf/proceedings/hicss/2008/3075/00/30750355.pdf Lakshman, C. (2007). Organizational knowledge leadership: a grounded theory approach. Leadership and Organization Development Journal, 28(1), 51–75. Lam, W., & Chua, A. (2005). Knowledge Management Project Abandonment: An Explanatory Examination of Root Causes. Communications of the Association for Information Systems, 16, 723–743. Lang, J. C. (2004). Social context and social capital as enablers of knowledge integration. Journal of Knowledge Management, 8(3), 89–105. doi:10.1108/13673270410541060 Lavergne, R., & Earl, R. L. (2006). Knowledge Management: A Value Creation. Journal of Organizational Culture, Communication and Conflict, 10(2), 43–60. Lee, K. C., Lee, S., & Kang, I. W. (2005). KMPI: Measuring knowledge management performance. Information & Management, 42, 469–482. doi:10.1016/j.im.2005.10.003 Lee, H., & Choi, B. (2003). Knowledge management enablers, processes, and organizational performance: An integrative view and empirical examination. Journal of Management Information Systems, 20(1), 179. Leitner, K. (2005). Managing and Reporting Intangible Assets in Research Technology Organisations. R & D Management, 35, 125–136. doi:10.1111/j.1467-9310.2005.00378.x Leliaert, P. J. C., Candries, W., & Tilmans, R. (2003). Identifying and Managing IC: A New Classification. Journal of Intellectual Capital, 4(2), 202–214. doi:10.1108/14691930310472820
Li, T., & Calantone, R. J. (1998). The Impact of Market Knowledge Competence on New Product Advantage: Conceptualization and Empirical Examination. Journal of Marketing, 62, 13–29. Liao, C., & Chuang, S. (2006). Exploring the Role of Knowledge Management for Enhancing Firm’s Innovation and Performance. HICSS39. IEEE Computer Society. Liebman, J. S. (1998). Teaching Operations Research: Lessons from Cognitive Psychology. Interfaces, 28(2), 104–110. doi:10.1287/inte.28.2.104 Liebowitz, J., & Suen, C. Y. (2000). Developing knowledge management metrics for measuring intellectual capital. Journal of Intellectual Capital, 1(1), 54–67. Liebowitz, J. (2003-2004). A Knowledge Management Strategy for the Jason Organization: A Case Study. Journal of Computer Information Systems, 44(2), 1–5. Lin, H.-F. (2007). A stage model of knowledge management: an empirical investigation of process and effectiveness. Journal of Information Science, 33(6), 643–659. doi:10.1177/0165551506076395 Lin, T., & Huang, C. (2008). Understanding knowledge management system usage antecedents: An integration of social cognitive theory and task technology fit. Information & Management, 45(6), 410–417. doi:10.1016/j. im.2008.06.004 Lindsey, K. (2002). Measuring Knowledge Management Effectiveness: A Task Contingent Organizational Capabilities Perspective. Eight Americas Conference on Information Systems, (pp. 2085-2090). Little, T. D., Cunningham, W. A., Shahar, G., & Widaman, K. F. (2002). To parcel or not to parcel: Exploring the question, weighing the merits. Structural Equation Modeling, 9(2), 151–173. doi:10.1207/S15328007SEM0902_1 Loehlin, J. C. (2004). Latent Variable Models: An introduction to factor, path, and structural equation analysis. Mahwah, NJ: Lawrence Erlbaum Associates.
Lorge, I., Fox, D., Davitz, J., & Brenner, M. (1958). A survey of studies contrasting the quality of group performance and individual performance, 1920-1957. Psychological Bulletin, 55(6), 337–372. doi:10.1037/h0042344 Luhmann, N. (1979). Trust and Power. New York: John Wiley. MacKenzie, S. B. (2003). The dangers of poor construct conceptualization. Journal of the Academy of Marketing Science, 31(3), 323–326. doi:10.1177/0092070303031003011 MacKenzie, S. B., Podsakoff, P. M., & Jarvis, C. B. (2005). The problem of measurement model misspecification in behavioral and organizational research and some recommended solutions. The Journal of Applied Psychology, 90(4), 710–730. doi:10.1037/0021-9010.90.4.710 Madjar, N., Oldham, G. R., & Pratt, M. G. (2002). There’s no place like home? The contributions of work and nonwork creativity support to employees’ creative performance. Academy of Management Journal, 45(4), 757–768. doi:10.2307/3069309 Maier, R. (2002). Knowledge Management Systems: Information and Communication Technologies for Knowledge Management. Berlin: Springer-Verlag. Mäki, E. (2008). Exploring and Exploiting Knowledge. Research on Knowledge Processes in KnowledgeIntensive Organizations, (Doctoral Dissertation). Helsinki University of Technology. Makolm, J., & Weiß, S. (2007). Forschung und Praxis Hand in Hand. In Zeitschrift des Telematik-IngenieurVerbandes TIV, Telematik 1/2007 TELEkommunikation & InforMATIK, Knowledge Worker Productivity, Web2.0 und Semantic Web: Was bringt`s für Unternehmen? (p. 12). Use Case DYONIPOS. Makolm, J., & Orthofer, G. (2007). Holistic Approach, Stakeholder Integration and Transorganizational Processes: Success Factors of FinanzOnline. In E-Taxation: State & Perspectives, E-Government in the Field of Taxation: Scientific Basis, Implementation Strategies, Good Practice Examples. Series Informatics, 21, 389-402.
Makolm, J., Weiß, S., & Reisinger, D. (2007). Proactive Knowledge Management: The DYONIPOS Research and Use Case Project. In Proceedings of the first Conference on Theory and Practice of Electronic Governance – ICEGOV 2007. (p. 85). Macao. Malhotra, Y., & Galletta, D. (2003). Role of Commitment and Motivation as Antecedents of Knowledge Management Systems Implementation. In Proceedings of the 36th Hawaii International Conference on System Sciences. Hawaii, USA: IEEE Computer Society Press. Mandviwalla, M., Eulgem, S., Mould, C., & Rao, S. V. (1998). Organizational Memory Systems Design. Unpublished Working Paper for the Task Force on Organizational Memory, Burstein, F., Huber, G., Mandviwalla, M., Morrison, J., and Olfman, L. (eds.), Presented at the 31st Annual Hawaii International Conference on System Sciences. March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71–87. doi:10.1287/orsc.2.1.71 Markus, M. L. (2001). Toward a Theory of Knowledge Reuse: Types of Knowledge Reuse Situations and Factors in Reuse Success. Journal of Management Information Systems, 18(1), 57–93. Marr, B. (2003). Known quantities. Financial Management Journal, 26-27. Martin, J. H., Martin, B. A., & Minnillo, P. R. (2009). Implementing a market orientation in small manufacturing firms: from cognitive model to action. Journal of Small Business Management, 47(1), 92–115. Massey, A. P., Montoya-Weiss, M. M., & O’Driscoll, T. M. (2002). Knowledge in the pursuit of performance: insights from Nortel Networks. Management Information Systems Quarterly, 26(3), 269–289. Matsuno, K., Mentzer, J. T., & Rentz, J. O. (2000). A refinement and validation of the MARKOR scale. Journal of the Academy of Marketing Science, 28(4), 527–539. doi:10.1177/0092070300284005
Matthai, J. M. (1989). Employee perceptions of trust, satisfaction, and commitment as predictors of turnover intentions in a mental health setting, Unpublished Doctoral dissertation, Vanderbilt University. McAllister, D. J. (1995). Affect and cognition-based trust as foundations for interpersonal cooperation in organizations. Academy of Management Journal, 38(1), 24–59. doi:10.2307/256727 McDermott, R., & O’Dell, C. (2001). Overcoming Cultural Barriers to sharing Knowledge. Journal of Knowledge Management, 5(1), 76–85. doi:10.1108/13673270110384428 McDermott, R. (1999). Why Information Technology Inspired but Cannot Deliver Knowledge Management. California Management Review, 41(4), 103–117.
Melville, N., Kraemer, K., & Gurbaxani, V. (2004). Review: Information Technology and Organizational Performance: An Integrative Model of IT Business Value. Management Information Systems Quarterly, 28(2), 283–322. Menon, A., & Varadarajan, P. R. (1992). A model of marketing knowledge use within firms. Journal of Marketing, 56, 53. Michell, J. (1999). Measurement in Psychology. Cambridge, UK: Cambridge University Press. doi:10.1017/CBO9780511490040 Microsoft (1999). Practicing Knowledge Management. Mintzberg, H. (1994). The Fall and Rise of Strategic Planning. Harvard Business Review, (January-February): 107–114.
McElroy, M. W. (2000). Integrating complexity theory, knowledge management and organizational learning. Journal of Knowledge Management, 4(3), 195–203. doi:10.1108/13673270010377652
Mintzberg, H., Ahlstrand, B. W., & Lampel, J. (1998). Strategy Safari: The complete guide through the wilds of strategic management. London: Pearson Educational.
McGrath, J. E. (1984). Groups: Interaction and performance. Englewood Cliffs, NJ: Prentice-Hall.
Mintzberg, H. (1978). Patterns of Strategy Formulation. Management Science, 24(9), 934–948. doi:10.1287/ mnsc.24.9.934
McKay, J., & Marshall, P. (2004). Strategic Management of e-Business. New York: John Wiley & Sons. McKeen, J. D., Zack, M. H., & Singh, S. (2006). Knowledge Management and Organizational Performance: An Exploratory Survey. In Proceedings of the 39th Hawaii International Conference on System Sciences. IEEE Computer Society Press. McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). Developing and Validating Trust Measures for e-Commerce: An Integrative Typology. Information Systems Research, 13(3), 334–359. doi:10.1287/isre.13.3.334.81 McKnight, H. D., & Chervany, N. L. (2000, August 10-13). What is Trust? A Conceptual Analysis and an Interdisciplinary Model. Paper presented at the Americas Conference on Information Systems (AMCIS), Long Beach, California.
Mintzberg, H., & Waters, J. A. (1985). Of Strategies, Deliberate and Emergent. Strategic Management Journal, 6(3), 257–272. doi:10.1002/smj.4250060306 Mittleman, D. D., Briggs, R. O., Murphy, J., & Davis, A. (2008). Toward a Taxonomy of Groupware Technologies, Presented at 14th Collaboration Researchers’ International Workshop on Groupware, Retrieved April 17, 2009 from http://ihop.typepad.com/docs/ criwg2008. pdf and Appendix from http://ihop.typepad.com/docs/ webfacilitationtools.xls. Moon, Y. J., & Kym, H. G. (2006). A Model for the Value of Intellectual Capital. Canadian Journal of Administrative Sciences, 23(3), 253–269. Muhammed, S., Doll, W. J., & Deng, X. (2008). Exploring the Relationships among Individual Knowledge Management Outcomes. In Proceedings of the 41st Annual Hawaii International Conference on System Sciences, Computer Society Press.
Murray, P. (1998). The Cranfield/Information Strategy Knowledge Survey; Europe's State of the Art in Knowledge Management. London: The Economist Group. Murthy, U. S., & Kerr, D. S. (2000). Task/Technology Fit and the Effectiveness of Group Support Systems: Evidence in the Context of Tasks Requiring Domain Specific Knowledge. In Proceedings of the 33rd Annual Hawaii International Conference on System Sciences. Computer Society Press. Myers, P. (Ed.). (1996). Knowledge management and organizational design. Boston: Butterworth–Heinemann. Narver, J. C., & Slater, S. F. (1990). The effect of market orientation on business profitability. Journal of Marketing, 54(4), 20–35. Narver, J. C., & Slater, S. F. (1995). Market Orientation and the Learning Organization. Journal of Marketing, 59(3), 63–74. Natarajan, G., & Shekhar, S. (2000). Knowledge management: Enabling Business growth. New Delhi: Tata McGraw-Hill. Neely, A. (2007). Business Performance Measurement. Cambridge, UK: Cambridge University Press. doi:10.1017/CBO9780511488481 Nelson, K. M., & Cooprider, J. G. (1996). The Contribution of Shared Knowledge to IS Group Performance. MIS Quarterly, 20(4), 409–432. doi:10.2307/249562 Netemeyer, R. G., Krishnan, B., Pullig, C., Wang, G., Yagei, M., & Dean, D. (2004). Developing and validating measures of facets of customer-based brand equity. Journal of Business Research, 57, 209–224. doi:10.1016/S0148-2963(01)00303-4 Niederman, F., Beise, C. M., & Beranek, P. M. (1996). Issues and Concerns about Computer-Supported Meetings: the Facilitator's Perspective. Management Information Systems Quarterly, 20(1), 1–22. doi:10.2307/249540
Nissen, M., & Jennex, M. (2007). Toward Multidimensional Conceptualization of Knowledge. In Jennex, M. E. (Ed.), Knowledge Management in Modern Organizations (pp. 278–284). Hershey, PA: Idea Group Publishing. Nöcker, R. (1999). Erfolg von Unternehmen aus betriebswirtschaftlicher Sicht. Unternehmerisch erfolgreiches Handeln (pp. 53-66). Nonaka, I. (1994). A Dynamic Theory of Organizational Knowledge Creation. Organization Science, 5(1), 14–37. doi:10.1287/orsc.5.1.14 Nonaka, I., & Takeuchi, H. (1997). The knowledge-creating company. Oxford, UK: Oxford University Press. Nonaka, I., & Takeuchi, H. (1995). The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. New York: Oxford University Press. Nunamaker, J. F., Dennis, A. R., Valacich, J. S., Vogel, D. R., & George, J. F. (1991). Electronic Meeting Systems to Support Group Work. Communications of the ACM, 34(7), 40–61. doi:10.1145/105783.105793 Nunamaker, J. F., Jr., Briggs, R. O., & Romano, N., Jr. (2001). A Framework for Collaboration and Knowledge Management. In Proceedings of the 34th Hawaii International Conference on System Sciences. Retrieved April 17, 2009 from http://www2.computer.org/plugins/dl/pdf/proceedings/hicss/2001/0981/01/09811060.pdf Nunnally, J. (1978). Psychometric theory (2nd ed.). New York: McGraw Hill. O'Dell, C., & Grayson, C. J. (1998). If only we knew what we know: Identification and transfer of internal best practices. California Management Review, 40(3), 154–174. O'Dell, C. (2004). The executive role in knowledge management: American Productivity and Quality Center. APQC.
O'Dell, C., Wiig, K., & Odem, P. (1999). Benchmarking unveils emerging knowledge management strategies. Benchmarking: An International Journal, 6(3), 202–211. doi:10.1108/14635779910288550 OECD. (2000). Knowledge Management in the Learning Society. Paris: OECD Publishing. OECD. (2003). Knowledge Management: Measuring Knowledge Management in the Business Sector – First Steps. Paris: OECD Centre for Educational Research and Innovation & Statistics Canada. Okunoye, A., & Karsten, H. (2002). ITI as Enabler of Knowledge Management: Empirical Perspective from Research Organisations in sub-Saharan Africa. HICSS32. IEEE Computer Society.
Pearl, J. (1990). Bayesian Decision Methods. In Shafer, G., & Pearl, J. (Eds.), Readings in Uncertain Reasoning (pp. 345–352). San Francisco: Morgan Kaufman Publishers. Pearson, T. (1990). Measurement and the Knowledge Revolution. In Cortada, J., & Woods, J. (Eds.), The Knowledge Management Yearbook. Boston: ButterworthHeinemann. Penrose, E. T. (1959). The theory of the growth of the firm. New York: Wiley and Sons. Peteraf, M. A., & Barney, J. B. (2003). Unraveling the Resource-Based Tangle. Managerial and Decision Economics, 24(4), 309–323. doi:10.1002/mde.1126 Peters, T. (1998). O Círculo da inovação. São Paulo, Brazil: Harbra.
Oldham, G. R., & Cummings, A. (1996). Employee creativity: Personal and contextual factors at work. Academy of Management Journal, 39(3), 607–635. doi:10.2307/256657
Pfeffer, J., & Sutton, R. I. (1999). Knowing ‘what’ to do is not enough: Turning knowledge into action. California Management Review, 42(1), 83–109.
Olson, E. (1995). Organization for Effective New Product Development: The Moderating Role of Product Innovativeness. Journal of Marketing, 59, 48–62.
Pfeffer, J. (2005). Understanding the Role of Power in Decision Making. In Shafritz, J. M., Ott, J. S., & Jang, Y. S. (Eds.), Classics of Organization Theory (6th ed., pp. 289–303). Belmont, CA: Thomson Wadsworth.
OPM, Office of Personnel Management. (2004, June). FedScope. Office of Personnel Management. Retrieved December 20, 2005, from http://www.fedscope.opm.gov/index.asp Parasuraman, A. (2000). Technology Readiness Index (TRI). Journal of Service Research, 2(4), 307–320. doi:10.1177/109467050024001 Pauleen, D., & Mason, D. (2002). New Zealand Knowledge Management Survey: Barriers and Drivers of KM Uptake. Retrieved January 10, 2004, from http://www.nzkm.net/mainsite/NewZealandKnowledgeManagementSurveyBarriersandDriv.html Paulzen, O., & Perc, P. (2002). A maturity model for quality improvement in knowledge management. In Proceedings of the 13th Australasian Conference on Information Systems (ACIS), 243-253.
Pike, S., Roos, G., & Marr, B. (2005). Strategic Management of Intangible Assets and Value Drivers in R&D Organizations. R & D Management, 35(2), 111–124. doi:10.1111/j.1467-9310.2005.00377.x
Ping, R. A., Jr. (2004). On assuring valid measures for theoretical models using survey data. Journal of Business Research, 57, 125–141. doi:10.1016/S0148-2963(01)00297-1
Pinsonneault, A., & Kraemer, K. L. (1989). The Impact of Technological Support on Groups: An Assessment of the Empirical Research. Decision Support Systems, 5(2), 197–216. doi:10.1016/0167-9236(89)90007-9
Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. The Journal of Applied Psychology, 88(5), 879–903. doi:10.1037/0021-9010.88.5.879
Polanyi, M. (1967). The Tacit Dimension. London: Routledge.
Polanyi, M. (1975). Personal Knowledge. Chicago: University of Chicago Press.
Politis, J. D. (2003). The connection between trust and knowledge management: What are its implications for team performance? Journal of Knowledge Management, 7(5), 55–66. doi:10.1108/13673270310505386
Pomerol, J., Brezillon, P., & Pasquier, L. (2002). Operational knowledge representation for practical decision-making. Journal of Management Information Systems, 18(4), 101–115.
Porter, M. (1995). Vantagem Competitiva. Rio de Janeiro, Brazil: Campus.
Prahalad, C., & Hamel, G. (1990). The Core Competence of the Corporation. Harvard Business Review, 68(3), 79–91.
Priem, R. L., & Butler, J. E. (2001). Tautology in the Resource-Based View and the Implications of Externally Determined Resource Value: Further Comments. Academy of Management Review, 26, 57–66. doi:10.2307/259394
Probst, G., Raub, S., & Romhardt, K. (2000). Managing Knowledge: Building Blocks for Success. Chichester, UK: John Wiley & Sons.
Probst, G., Raub, S., & Romhardt, K. (2006). Wissen managen: Wie Unternehmen ihre wertvollste Ressource optimal nutzen. Berlin, Germany: Gabler Verlag.
Probst, G., & Romhardt, K. (n.d.). Bausteine des Wissensmanagements – ein praxisorientierter Ansatz. Retrieved from http://wwwai.wu-wien.ac.at/~kaiser/seiw/Probst-Artikel.pdf
Quaddus, M., & Xu, J. (2005). Adoption and diffusion of knowledge management systems: Field studies of factors and variables. Knowledge-Based Systems, 18, 107–115.
Rao, M. (2002, August 23). Eight Keys to Successful KM Practice. Retrieved February 1, 2003, from http://www.destinationkm.com/articles/default.asp?ArticleID=990
Rath, A. (2007). A Low-Level Based Task and Process Support Approach for Knowledge-Intensive Business Environments. In Proceedings of the 5th International Conference on Enterprise Information Systems, Doctoral Consortium (DCEIS 2007) (pp. 35–42). Madeira, Portugal, June 12, 2007.
Rath, A., Kröll, M., Andrews, K., Lindstaedt, S., Granitzer, M., & Tochtermann, K. (2006). Synergizing Standard and Ad-Hoc Processes. In Proceedings of the 6th International Conference on Practical Aspects of Knowledge Management. Vienna, Austria: Springer (LNCS).
Rath, A., Kröll, M., Lindstaedt, S., & Granitzer, M. (2007). Low-Level Event Relationship Discovery for Knowledge Work Support. In Proceedings of the 4th Conference on Professional Knowledge Management, “ProKW 2007 – Productive Knowledge Work: Management and Technological Challenges”, Potsdam, Germany, March 28–30, 2007. Berlin: GITO-Verlag.
Reber, A. S. (1989). Implicit Learning and Tacit Knowledge. Journal of Experimental Psychology: General, 118(3), 219–235. doi:10.1037/0096-3445.118.3.219
Reed, K. K., Lubatkin, M., & Srinivasan, N. (2006). Proposing and Testing an Intellectual Capital-Based View of the Firm. Journal of Management Studies, 43, 867–893. doi:10.1111/j.1467-6486.2006.00614.x
Renzl, B. (2008). Trust in management and knowledge sharing: The mediating effects of fear and knowledge documentation. Omega, 36, 206–220. doi:10.1016/j.omega.2006.06.005
Ribière, V. (2001). Assessing Knowledge Management Initiative Successes as a Function of Organizational Culture. Unpublished D.Sc. dissertation, The George Washington University, Washington, DC.
Quintas, P., Lefrere, P., & Jones, G. (1997). Knowledge Management: A Strategic Agenda. Long Range Planning, 30(3), 385–391.
Ribière, V. (2005). The critical role of trust in knowledge management (Le rôle primordial de la confiance dans les démarches de gestion du savoir). Unpublished PhD dissertation (Management Sciences), Université Paul Cézanne, Aix-en-Provence, France. Available on ProQuest.
Riege, A. (2005). Three-dozen knowledge-sharing barriers managers must consider. Journal of Knowledge Management, 9(3), 18–35.
Riempp, G. (2004). Integrierte Wissensmanagement-Systeme – Architektur und praktische Anwendung. Berlin, Germany: Springer.
Rigby, D., & Bilodeau, B. (2007). Management Tools and Trends 2007. Bain & Company.
Rigdon, E. E. (1998). Structural equation modeling. In Marcoulides, G. A. (Ed.), Modern Methods for Business Research (pp. 251–293). Mahwah, NJ: Lawrence Erlbaum Associates.
Robbins, S. (2005). We need a new vocabulary. Information Systems Management, 22(1), 89–90. doi:10.1201/1078/44912.22.1.20051201/85744.12
Rockart, J. F. (1979). Chief Executives Define Their Own Data Needs. Harvard Business Review, 57(2), 81–93.
Rogers, S. B., McDonald, K. D., & Brown, V. A. (2005). CFOs Positioned to Drive BI Integration. Financial Executive, 21(7), 46.
Rolland, N., & Chauvel, D. (2000). Knowledge Transfer in Strategic Alliances. In Despres, C., & Chauvel, D. (Eds.), Knowledge Horizons (pp. 225–236). Butterworth-Heinemann. doi:10.1016/B978-0-7506-7247-4.50014-8
Ronen, Y. (1988). The Role of Uncertainties. In Ronen, Y. (Ed.), Uncertainty Analysis (pp. 2–39). Boca Raton, FL: CRC Press.
Roos, G., & Roos, J. (1997). Measuring your Company’s Intellectual Performance. Long Range Planning, 30(3), 413–426. doi:10.1016/S0024-6301(97)90260-0
Rouse, M. J., & Daellenbach, U. S. (1999). Rethinking Research Methods for the Resource-Based Perspective: Isolating Sources of Sustainable Competitive Advantage. Strategic Management Journal, 20, 487–494. doi:10.1002/(SICI)1097-0266(199905)20:5<487::AID-SMJ26>3.0.CO;2-K
Ruggles, R. (1998). The state of the notion: Knowledge management in practice. California Management Review, 40(3), 80–89.
Rulke, D. L., & Galaskiewicz, J. (2000). Distribution of knowledge, group network structure, and group performance. Management Science, 46(5), 612–625. doi:10.1287/mnsc.46.5.612.12052
Sage, A. P., & Rouse, W. B. (1999). Information Systems Frontiers in Knowledge Management. Information Systems Frontiers, 1(3), 205–219. doi:10.1023/A:1010046210832
Saint-Onge, H. (1996). Tacit Knowledge: The Key to the Strategic Alignment of Intellectual Capital. Strategy and Leadership, 24, 10–14. doi:10.1108/eb054547
Sako, M. (2006). Does trust improve business performance? In Kramer, R. M. (Ed.), Organizational Trust: A Reader (pp. 267–294). Oxford University Press.
Santhanam, R., & Hartono, E. (2003). Issues in Linking Information Technology Capability to Firm Performance. Management Information Systems Quarterly, 27(1), 125–153.
Sarker, S., Nicholson, D. B., & Joshi, K. D. (2005). Knowledge Transfer in Virtual System Development Teams. IEEE Transactions on Professional Communication, 48(2), 201–218. doi:10.1109/TPC.2005.849650
Satyadas, A., Harigopal, U., & Cassaigne, N. P. (2001). Knowledge Management Tutorial: An Editorial Overview. IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews, 31(4), 429–437. doi:10.1109/5326.983926
Savage, C. M. (1996). Fifth generation management: Co-creating through virtual enterprising, dynamic teaming and knowledge networking. Boston: Butterworth-Heinemann.
Scharmer, C. O. (2001). Self-transcending Knowledge: Organizing Around Emerging Realities. In Nonaka, I., & Teece, D. J. (Eds.), Managing Industrial Knowledge (pp. 68–90). London: Sage Publications.
Segars, A. (1997). Assessing the unidimensionality of measurement: A paradigm and illustration within the context of information systems research. Omega: The International Journal of Management Science, 25(1), 107–121.
Schein, E. H. (2001). Defining organizational culture. In Classics of Organization Theory (pp. 369–379). Fort Worth, TX: Harcourt College Publishers.
Sethi, V., & King, W. R. (1991). Construct measurement in information systems research: An illustration in strategic systems. Decision Sciences, 22(3), 455–464. doi:10.1111/j.1540-5915.1991.tb01274.x
Schmalen, C., Kunter, M., & Weindlmaier, H. (2005). Erfolgsfaktorenforschung: Theoretische Grundlagen, methodische Vorgehensweise und Anwendungserfahrung in Projekten für die Ernährungsindustrie. In Proceedings der 45. Tagung für Gesellschafts- und Sozialwissenschaften des Landbaues, Göttingen.
Schoorman, F. D., Mayer, R. C., & Davis, J. H. (2007). An integrative model of organizational trust: Past, present and future. Academy of Management Review, 32(2), 344–354.
Schultze, U., & Leidner, D. (2002). Studying knowledge management in information systems research: Discourses and theoretical assumptions. Management Information Systems Quarterly, 26(3), 213–242. doi:10.2307/4132331
Schultze, U., & Stabell, C. (2004). Knowing What You Don’t Know? Discourses and Contradictions in Knowledge Management. Journal of Management Studies, 41(4), 549–573.
Schulz, M., & Jobe, L. A. (2001). Codification and tacitness as knowledge management strategies: An empirical exploration. The Journal of High Technology Management Research, 12(1), 139–165. doi:10.1016/S1047-8310(00)00043-2
Schwarz, R. (2002). The Skilled Facilitator: A Comprehensive Resource for Consultants, Facilitators, Managers, Trainers and Coaches. San Francisco: Jossey-Bass.
Scott, S. G., & Bruce, R. A. (1994). Determinants of innovative behavior: A path model of individual innovation in the workplace. Academy of Management Journal, 37(3), 580–607. doi:10.2307/256701
Seddon, P. B. (1997). A Respecification and Extension of the DeLone and McLean Model of IS Success. Information Systems Research, 8(3), 240–253. doi:10.1287/isre.8.3.240
Shaw, R. B. (1997). Trust in the Balance: Building Successful Organizations on Results, Integrity, and Concern. San Francisco: Jossey-Bass.
Sher, P. J., & Lee, V. C. (2004). Information technology as a facilitator for enhancing dynamic capabilities through knowledge management. Information & Management, 41(8), 933–945. doi:10.1016/j.im.2003.06.004
Sheskin, D. J. (2004). Handbook of parametric and nonparametric statistical procedures (3rd ed.). Boca Raton, FL: Chapman & Hall/CRC.
Shevlin, M., Miles, J. N. V., Davies, M. N. O., & Walker, S. (2000). Coefficient alpha: A useful indicator of reliability? Personality and Individual Differences, 28, 229–237. doi:10.1016/S0191-8869(99)00093-8
Shin, M. (2004). A framework for evaluating economics of knowledge management systems. Information & Management, 42, 179–196.
Simon, H., & Kaplan, C. (1989). Foundations in Cognitive Science. Cambridge, MA: The MIT Press.
Sistare, H. (2004). Government Reorganization: Strategies and Tools to Get it Done (2004 Presidential Transition Series). Washington, DC: IBM Center for The Business of Government.
Skyrme, D. (1998). Measuring the Value of Knowledge: Metrics for the Knowledge-Based Business. London: Business Intelligence.
Slater, S. F., & Narver, J. C. (1994). Does Competitive Environment Moderate the Market Orientation–Performance Relationship? Journal of Marketing, 58(January), 46–55.
Slywotzky, A. (1996). Value Migration. Boston: Harvard Business School Press.
Stein, E. W., & Zwass, V. (1995). Actualizing Organizational Memory With Information Systems. Information Systems Research, 6(2), 85–117. doi:10.1287/isre.6.2.85
Smith, R. G., & Farquhar, A. (2000). The road ahead for knowledge management, an AI perspective. AI Magazine, 21(4), 17–40.
Steiner, I. D. (1972). Group Process and Productivity. New York: Academic Press.
Smith, T. W. (1983). The hidden 25 percent: An analysis of nonresponse on the 1980 general social survey. Public Opinion Quarterly, 47(3), 386–404. doi:10.1086/268797
Stenmark, D. (2001). Leveraging Tacit Organizational Knowledge. Journal of Management Information Systems, 17(3), 9–24.
Smith, H. A., McKeen, J. D., & Singh, S. (2007). Tacit Knowledge Transfer: Making It Happen. Journal of Information Science & Technology, 3(3), 50–72.
Stewart, T. (1997). Intellectual Capital. The New Wealth of Organizations. New York: Doubleday Currency.
Smits, M., & de Moor, A. (2004). Measuring Knowledge Management Effectiveness in Communities of Practice. Paper presented at the 37th Hawaii International Conference on System Sciences, Hawaii.
Smolnik, S. (2006). Wissensmanagement mit Topic Maps in kollaborativen Umgebungen – Identifikation, Explikation und Visualisierung von semantischen Netzwerken in organisationalen Gedächtnissen. Berlin, Germany: Shaker.
Soh, C., & Markus, M. L. (1995). How IT creates business value: A process theory synthesis. In Proceedings of the Sixteenth International Conference on Information Systems (pp. 29–41).
Spector, P. E. (1992). Summated rating scale construction. Newbury Park, CA: Sage.
Spender, J. C. (1996). Organizational knowledge, learning and memory: Three concepts in search of a theory. Journal of Organizational Change Management, 9(1), 63–78.
Spitzer, D. (2007). Transforming Performance Measurement. New York: American Management Association.
Standards Australia. (2005). AS 5037-2005 Knowledge management – a guide (2nd ed.). Sydney: Standards Australia.
Stehr, N. (2003). The Governance of Knowledge. New York: Transaction Publishers.
Stewart, T. (1991). Brainpower. Fortune, 123, 44–50.
Stivers, B. P., Covin, T. J., Hall, N. G., & Smalt, S. W. (1998). How nonfinancial performance measures are used. Management Accounting (USA), 79(8), 44–48.
Storey, J., & Barnett, E. (2000). Knowledge management initiatives: Learning from failure. Journal of Knowledge Management, 4(2), 145–156. doi:10.1108/13673270010372279
Straub, D., Boudreau, M.-C., & Gefen, D. (2004). Validation guidelines for IS positivist research. Communications of the Association for Information Systems, 13, 380–427.
Subramaniam, M., & Youndt, M. A. (2005). The Influence of Intellectual Capital on the Types of Innovative Capabilities. Academy of Management Journal, 48, 450–463.
Sun, P. Y.-T., & Scott, J. L. (2005). An investigation of barriers to knowledge transfer. Journal of Knowledge Management, 9(2), 75–90.
Sveiby, K. E. (1997). The New Organizational Wealth: Managing and Measuring Knowledge-Based Assets. San Francisco: Berrett-Koehler.
Sveiby, K. E. (2001). Methods for Measuring Intangible Assets. Retrieved August 3, 2004, from http://www.sveiby.com/articles/IntangibleMethods.htm
Swan, J., Newell, S., Scarbrough, H., & Hislop, D. (1999). Knowledge management and innovation: Networks and networking. Journal of Knowledge Management, 3(4), 262–275. doi:10.1108/13673279910304014
Swap, W., Leonard, D., Shields, M., & Abrams, L. (2001). Using Mentoring and Storytelling to Transfer Knowledge in the Workplace. Journal of Management Information Systems, 18(1), 95–114.
Swart, J., & Kinnie, N. (2003). Sharing knowledge in knowledge-intensive firms. Human Resource Management Journal, 13, 60.
Tabachnick, B. G., & Fidell, L. S. (2001). Using Multivariate Statistics (4th ed.). New York: Allyn and Bacon.
Tan, H. H., & Lim, A. K. H. (2009). Trust in Coworkers and Trust in Organizations. The Journal of Psychology, 143(1), 45–66. doi:10.3200/JRLP.143.1.45-66
Tanriverdi, H. (2005, June). Information technology relatedness, knowledge management capability, and performance of multibusiness firms. Management Information Systems Quarterly, 29(2), 311–334.
Tanudjojo, S., & Braganza, A. (2005). Overcoming Barriers to Knowledge Flow: Evidence-Based Attributes Enabling the Creation, Mobilization, and Diffusion of Knowledge. Paper presented at the 38th Hawaii International Conference on System Sciences (HICSS), Hawaii.
Tapscott, D. (2006). Winning with the Enterprise 2.0: IT and Collaborative Advantage. New Paradigm. Retrieved April 17, 2009, from http://newparadigm.com/media/Winning_with_the_Enterprise_2.0.pdf
Teece, D. J. (1998). Capturing value from knowledge assets: The New Economy, Markets for Know-How, and Intangible Assets. California Management Review, 40(3).
Teoh, K. K., & Avvari, M. (2004). Integration of TAM Based Electronic Commerce Models for Trust. Journal of American Academy of Business, 5(1/2), 404–410.
Thomas, B. D. (2006). An Empirical Investigation of Factors Promoting Knowledge Management System Success. Doctoral dissertation, Texas Tech University. Retrieved April 17, 2009, from http://etd.lib.ttu.edu/theses/available/etd-07072006-105657/unrestricted/Thomas_Bobby_Diss.pdf
Tiwana, A. (2002). The Knowledge Management Toolkit: Orchestrating IT, Strategy, and Knowledge Platforms (2nd ed.). Upper Saddle River, NJ: Prentice Hall.
Tiwana, A. (2000). The Knowledge Management Toolkit: Practical Techniques for Building a Knowledge Management System. Upper Saddle River, NJ: Prentice Hall.
Tiwana, A., & McLean, E. R. (2005). Expertise Integration and Creativity in Information Systems Development. Journal of Management Information Systems, 22(1), 13–43.
Treacy, M., & Wiersema, F. (1995). The Discipline of Market Leaders. Reading, MA: Addison-Wesley.
Triandis, H. C. (1980). Beliefs, Attitudes, and Values. Lincoln, NE: University of Nebraska Press.
Tsoukas, H., & Vladimirou, E. (2001). What is Organizational Knowledge? Journal of Management Studies, 38(7), 973–993. doi:10.1111/1467-6486.00268
Tsui, E. (2005). The role of IT in KM: Where are we now and where are we heading? Journal of Knowledge Management, 9(1), 3–6. doi:10.1108/13673270510584198
Tunney, R. J. (2003). Implicit and Explicit Knowledge Decay at Different Rates: A Dissociation Between Priming and Recognition in Artificial Grammar Learning. Experimental Psychology, 50(2), 124–130. doi:10.1026//1618-3169.50.2.124
Tuomi, I. (1999). Data is More than Knowledge: Implications of the Reversed Hierarchy for Knowledge Management and Organizational Memory. In Proceedings of the Thirty-Second Hawaii International Conference on System Sciences. Los Alamitos, CA: IEEE Computer Society Press.
Turban, E., & Aronson, J. E. (2001). Decision Support Systems and Intelligent Systems (6th ed.). Upper Saddle River, NJ: Prentice Hall.
Turban, E., Aronson, J. E., Liang, T.-P., & Sharda, R. (2007). Decision Support and Business Intelligence Systems. Upper Saddle River, NJ: Prentice Hall.
Van De Ven, A. (1986). Central problems in the management of innovation. Management Science, 32(5), 590–607. doi:10.1287/mnsc.32.5.590
Van den Berghe, L., & De Ridder, L. (1999). International Standardisation of Good Corporate Governance: Best Practices for the Board of Directors. Boston: Kluwer Academic Publishers.
Van Grembergen, W., De Haes, S., & Van Brempt, H. (2007). Strong Consensus on the Most Important Business and IT Goals. COBIT Focus, 3, 1–3.
Vance, D. M. (1997). Information, Knowledge and Wisdom: The Epistemic Hierarchy and Computer-Based Information Systems. In Proceedings of the Third Americas Conference on Information Systems, Indianapolis.
Waller, R. J. (1975). Application of Interpretive Structural Modeling to Priority-Setting in Urban Systems Management. Portraits of Complexity (Battelle Monograph No. 9) (Baldwin, M., Ed.). Columbus, OH: Battelle Memorial Institute. Walsh, J. P., & Ungson, G. R. (1991). Organizational Memory. Academy of Management Review, 16(1), 57–91. doi:10.2307/258607 Ward, J., & Aurum, A. (2004). Knowledge Management in Software Engineering – Describing the Process. ASWEC 2004. IEEE Computer Society. Warfield, J. N. (1973). Intent Structures. IEEE Transactions on Systems, Man, and Cybernetics, 3(2). Weber, F., Wunram, M., Kemp, J., Pudlatz, M., & Bredehorst, B. (2002). Towards a common KM framework in Europe. In Proceedings of Unicom Seminar. Standardization in Knowledge Management. Weber, M. (2005). Bureaucracy. In Shafritz, J. M., Ott, J. S., & Jang, Y. S. (Eds.), Classics of Organization Theory (6th ed., pp. 73–78). Belmont, CA: Thomson Wadsworth.
Velicer, W. F., & Fava, J. L. (1998). Effects of variable and subject sampling on factor pattern recovery. Psychological Methods, 3(2), 231–251. doi:10.1037/1082-989X.3.2.231
Weerdmeester, R., Pocaterra, C., & Hefke, M. (2003). Knowledge Management Maturity Model (Deliverable D5.2, Project No. ST-2002-38513). Information Societies Technology (IST) Programme.
Venkatraman, N. (1989). Strategic orientation of business enterprises. Management Science, 35(8), 942–962. doi:10.1287/mnsc.35.8.942
Weill, P., & Ross, J. (2004). IT governance: how top performers manage IT decision rights for superior results. Boston: Harvard Business School Press.
Verhoef, P. C., & Leeflang, P. S. H. (2009). Understanding the marketing department’s influence within the firm. Journal of Marketing, 73, 14–37.
Welch, J. (1993, Jan 25). Jack Welch’s lessons for success. Fortune, 127, 86–91.
von Krogh, G. (1998). Care in knowledge creation. California Management Review, 40(3), 133–153.
Wade, M., & Hulland, J. (2004). Review: The Resource-Based View and Information Systems Research: Review, Extension, and Suggestions for Future Research. Management Information Systems Quarterly, 28(1), 107–142.
Wernerfelt, B. (1984). A Resource-Based View of the Firm. Strategic Management Journal, 5(2), 171–180. doi:10.1002/smj.4250050207 Wick, D. (1995). The Infamous Boundary: Seven Decades of Controversy in Quantum Physics. Boston: Birkhäuser. Wick, C. (2000). Knowledge management and leadership opportunities for technical communicators. Technical Communications, 47(4), 515–529.
Wiig, K. M. (2004). People-focused Knowledge Management: How Effective Decision Making Leads to Corporate Success. Burlington, MA: Elsevier Butterworth-Heinemann.
Wiig, K., & Jooste, A. (2004). Exploiting knowledge for productivity gains. In Holsapple, C. W. (Ed.), Handbook on Knowledge Management, Vol. 2: Knowledge Directions (pp. 289–308). New York: Springer Science and Business Media.
Williams, S. (2004). Building and repairing trust. Retrieved October 2008, from http://www.wright.edu/~scott.williams/LeaderLetter/trust.htm
Yu, S.-H., Kim, Y.-G., & Kim, M.-Y. (2004). Linking Organizational Knowledge Management Drivers to Knowledge Management Performance: An Exploratory Study. In Proceedings of the 37th Hawaii International Conference on System Sciences. Hawaii, USA: IEEE Computer Society Press. Yung, Y. F., Thissen, D., & McLeod, L. D. (1999). On the Relationship Between the Higher-Order Factor Model and the Hierarchical Factor Model. Psychometrika, 64(2), 113–128. doi:10.1007/BF02294531 Zack, M. H. (1999). Managing Codified Knowledge. Sloan Management Review, 40(4), 45–58.
Wilson, R., & Keil, F. (1999). The MIT Encyclopedia of the Cognitive Sciences. Cambridge: The MIT Press.
Zack, M. H. (1999). Developing a Knowledge Strategy. California Management Review, 41(3), 125–145.
Wong, K. (2005). Critical success factors for implementing knowledge management in small and medium enterprises. Industrial Management & Data Systems, 105(3), 261–279. doi:10.1108/02635570510590101
Zack, M. H., & Michael, S. (1998). Knowledge Management and Collaboration Technologies. Retrieved from http://www.lotus.com/services/institute.nsf/550137bfe37d25a18525653a005e8462/000021ca
Workman, J. P., Homburg, C., & Gruner, K. (1998). Marketing organization: an integrative framework of dimensions and determinants. Journal of Marketing, 62, 21–41.
Zack, M. H., & Street, C. (2007, June). A Framework for Assessing the Impact of Knowledge on Firm Performance. Paper presented at The International Conference on Organizational Learning, Knowledge, and Capabilities, University of Western Ontario
Wu, J., & Wang, Y. (2006). Measuring KMS success: A respecification of the DeLone and McLean’s model. Information & Management, 43(6), 728–739. doi:10.1016/j.im.2006.05.002
Wu, J. (2008). Exploring the link between knowledge management performance and firm performance. Lexington, KY: University of Kentucky.
Yang, Z., Cai, S., Zhou, Z., & Zhou, N. (2005). Development and validation of an instrument to measure user perceived service quality of information presenting web portals. Information & Management, 42, 575–589. doi:10.1016/S0378-7206(04)00073-4
Yoshioka, T., Herman, G., Yates, J., & Orlikowski, W. J. (2001). Genre taxonomy: A knowledge repository of communicative actions. ACM Transactions on Information Systems, 19(4), 431–456. doi:10.1145/502795.502798
Zack, M. H., McKeen, J. D., & Singh, S. (2006). Knowledge Management and Organizational Performance: An Exploratory Survey. Paper presented at the Thirty-Ninth Hawaii International Conference on System Sciences, Hawaii. Zaheer, A., McEvily, B., & Perrone, V. (1998). Does Trust Matter? Exploring the Effects of Interorganizational and Interpersonal Trust on Performance. Organization Science, 9(2), 141–159. doi:10.1287/orsc.9.2.141 Zand, D. E. (1997). The leadership Triad - Knowledge, Trust, and Power. New York, NY: Oxford University Press. Zhu, K. (2004). The complementarity of information technology infrastructure and e-commerce capability: A resource-based assessment of their business value. Journal of Management Information Systems, 21(1), 167–202.
Zigurs, I., & Buckland, B. K. (1998). A Theory of Task/Technology Fit and Group Support Systems Effectiveness. Management Information Systems Quarterly, 22(3), 313–334. doi:10.2307/249668
Zigurs, I., Buckland, B. K., Connolly, J. R., & Wilson, E. V. (1999). A Test of Task-Technology Fit Theory of Group Support Systems. The Data Base for Advances in Information Systems, 30(3–4), 34–50.
Zyngier, S., Burstein, F., & McKay, J. (2005). Governance of Strategies to Manage Organizational Knowledge: A mechanism to oversee knowledge needs. In Jennex, M. (Ed.), Knowledge Management Case Book. Hershey, PA: Idea Group Books.
Zyngier, S. (2005). Knowledge Management Governance. In Schwartz, D. (Ed.), The Encyclopedia of Knowledge Management. Hershey, PA: Idea Group Publishing.
Zohar, D. (1997). Rewiring the Corporate Brain. Using the New Science to Rethink How We Structure and Lead Organizations. San Francisco: Berrett-Koehler.
Zyngier, S. (2002). Knowledge Management Obstacles in Australia. Paper presented at the 10th European Conference on Information Systems, Gdańsk, Poland.
van Zolingen, S. J., Streumer, J. N., & Stooker, M. (2001). Problems in Knowledge Management: A Case Study of a Knowledge-Intensive Company. International Journal of Training and Development, 5(3), 168–184. doi:10.1111/1468-2419.00130
Zyngier, S. (2007). Knowledge Management Governance: a Framework for Knowledge Management Benefits Realization. Paper presented at the 8th International Research Conference on Quality, Innovation and Knowledge Management, New Delhi, India.
Zyngier, S. (2008). Risk management: Strengthening Knowledge Management. International Journal of Knowledge Management, 4(3).
Zyngier, S., & Burstein, F. (2009). Knowledge management governance as a mechanism for sustainable organizational knowledge creation and transfer. International Journal of Learning and Intellectual Capital, 6.
Zyngier, S. (2003). The role of information technology in knowledge management strategies in Australia: Recent trends. Journal of Information and Knowledge Management, 2(2), 165–178. doi:10.1142/S0219649203000061
Zyngier, S., Burstein, F., & Rodriguez, M.-L. (2003). Knowledge Management Strategies in Australia. In Hasan, H., & Handzic, M. (Eds.), Australian Studies in Knowledge Management. Wollongong, Australia: University of Wollongong Press.
Zyngier, S., Burstein, F., & McKay, J. (2004). Knowledge management governance: A multifaceted approach to organizational decision and innovation support. Paper presented at Decision Support in an Uncertain World: Proceedings of the 2004 IFIP WG8.3 International Conference on Decision Support Systems (DSS2004), Prato, Italy.
Zyngier, S., Burstein, F., & McKay, J. (2006). The Role of Knowledge Management Governance in the Implementation of Strategy. Paper presented at the 39th Hawaii International Conference on System Sciences (HICSS-39), Hawaii.
About the Contributors
Murray E. Jennex is an associate professor at San Diego State University, editor in chief of the International Journal of Knowledge Management, co-editor in chief of the International Journal of Information Systems for Crisis Response and Management, and president of the Foundation for Knowledge Management (LLC). Dr. Jennex specializes in knowledge management, system analysis and design, IS security, e-commerce, and organizational effectiveness. Dr. Jennex serves as the Knowledge Management Systems Track co-chair at the Hawaii International Conference on System Sciences. He is the author of over 100 journal articles, book chapters, and conference proceedings on knowledge management, end user computing, international information systems, organizational memory systems, ecommerce, cyber security, and software outsourcing. Dr. Jennex conducts research for the National Center for Border Security Issues on risk management and technology integration. Dr. Jennex is a former US Navy Nuclear Power Propulsion officer and holds a BA in chemistry and physics from William Jewell College, an MBA and an MS in software engineering from National University, an M.S. in telecommunications management and a PhD in information systems from the Claremont Graduate University. Dr. Jennex is also a registered professional mechanical engineer in the state of California and a Certified Information Systems Security Professional (CISSP) and a Certified Secure Software Lifecycle Professional (CSSLP). Stefan Smolnik is an assistant professor of information and knowledge management at EBS University of Business and Law, Germany. He holds a doctoral degree from University of Paderborn/Germany. Before joining EBS, he worked as a research and teaching assistant at this university’s Groupware Competence Center. Stefan Smolnik has done research on the success and performance measurement of information and knowledge management systems, which has included several benchmarking studies. In addition, he is interested in the successful organizational implementation of social software. His work has been published in well reputed international journals and conference proceedings such as the Business & Information Systems Engineering journal, the International Journal of Knowledge Management, the Business Process Management Journal, the Proceedings of the Annual Hawaii International Conference on System Sciences, and the Proceedings of the Annual International Conference on Information Systems. *** Derek Ajesam Asoh, PhD is an Assistant Professor in the School of Information Systems and Applied Technologies, College of Applied Sciences and Arts, Southern Illinois University Carbondale. He holds a PhD in Information Science (inter-disciplinary) from the University at Albany (SUNY) New
York. His research interests include data mining, educational technologies, entrepreneurship, health informatics, knowledge management, systems management technologies, and statistical modeling. His publications have appeared in Health Care Management Review and Methods of Information in Medicine. His research has also been presented at several conferences including Hawaii International Conference on Systems Sciences, Information Resources Management Association, Organizational Systems Research Association, and Systemic, Cybernetics and Informatics. Dr. Asoh currently teaches application development environments, computing in business administration, and database processing. His industry experiences include working as consultant and project coordinator for a number of international organizations, including the United Nations Office for Project Services, United Nations Development Program, and United Nations Economic Commission for Africa. Vittal S. Anantatmula’s research is focused on integrating knowledge management and project management, knowledge management effectiveness, project management performance, and leadership. Dr. Anantatmula is an Associate Professor and the Director of Graduate Programs in Project Management in the College of Business, Western Carolina University. Dr. Anantatmula has more than ten publications in journals such as Journal of Knowledge Management, International Journal of Knowledge Management, Journal of Information and Knowledge Management Systems (VINE), International Journal of Knowledge and Learning, and Project Management Journal. He has co-authored two books on project management. Dr. Anantatmula has presented more than 20 papers in prestigious and international conferences. Prior to joining Western Carolina University, he was with the George Washington University teaching and directing a graduate degree program. Dr. Anantatmula has worked in the petroleum and power industries for several years as an electrical engineer and project manager. As a consultant, he worked with the World Bank, Arthur Andersen, and other international consulting firms. Dr. Anantatmula holds B.E. (Electrical Engineering) from Andhra University, MBA from IIM-MDI, MS and D.Sc. in Engineering Management from the George Washington University. He is a certified Project Management Professional and Certified Cost Engineer. Salvatore Belardo, PhD is Associate Professor of Management Science and Information Systems at the University at Albany. He holds a bachelors degree in mechanical engineering and MBA, Masters, and PhD in Management Information Systems. Professor Belardo has been a visiting professor at the Copenhagen School of Business, the University of Passau (Germany), University of Del Salvador (Argentina), DUXX Graduate School of Business Leadership (Mexico), and Zurich Graduate School of Business Administration (Switzerland). He has published in Management Science, Decision Sciences, IEEE Transactions on Systems Man and Cybernetics, and the Journal of Management Information Systems; and has been recognized as one of the most prolific authors of decision support systems-related research. Interfaces Journal has recognized him as one of the top ten most cited authors. Dr. Belardo edited Simulation in Business and Management, and is co-author of Trust: The Key to Change in the Information Age, and Innovation Through Learning: What Leaders Need to Know in the 21st Century. 
Johanna Bragge holds a PhD in Management Science from the Helsinki School of Economics, and currently acts there as a professor of Information Systems Science. In her dissertation she applied decision and negotiation analytic methods while pre-mediating an escalated dispute regarding energy taxation in Finland. Dr. Bragge is the coordinator and main facilitator of HSE’s Electronic Decision-making and Groupwork Environment. Her current research interests include themes related to e-collaboration,
digital marketing, and text mining. She has published, among others, in Journal of the Association for Information Systems, IEEE Transactions on Professional Communication, Group Decision and Negotiation, and European Journal of Operational Research. Rodrigo Baroni de Carvalho is a professor in the Master in Business Administration of Fumec University, Minas Gerais, Brazil. He has a PhD in Information Science from the Federal University of Minas Gerais (UFMG). Part of his PhD was done at the Faculty of Information Studies, University of Toronto, Canada with the supervision of professor Chun Wei Choo. His master degree was in Information Science and the bachelor degree was in Computer Science both from UFMG. Before being a full-time professor, he has worked for 16 years as system analyst and IT project manager mainly in the financial industry. His main research interests are knowledge management, KM software, ERPs, portals, technology acceptance, software engineering and information science. Gregorio Martín de Castro is Assistant Professor of Business Administration at University Complutense de Madrid, Spain. He is also a Research Associate at the CIC Spanish Knowledge Society Research Centre. Professor Martín de Castro holds an expert Diploma in Intellectual Capital and Knowledge Management from IUEE and Insead, France, and he was a Fellow at Real Colegio ComplutenseHarvard University from 2004 to 2005, and he was also a Fellow at Manchester Institute of Innovation Research (University of Manchester) in 2009. He is author and co-author of several papers and books on the Resource-Based View, Intellectual Capital and Knowledge Management. Jakov (Yasha) Crnkovic, PhD is an Associate Professor and Chair of the ITM Department, University at Albany (SUNY), New York. He completed his education at the University at Belgrade, Yugoslavia. His post-doc activities were in CERN (Geneva, Switzerland) and Middlesex University (London, GB). Prior to joining SUNY, he was professor at the University of Miami, Florida, University at Belgrade (Yugoslavia), College of Saint Rose (Albany, New York). He was leading project manager and team member in many projects for The Traffic Research Institute and the Faculty of Economics Research Institute (Belgrade, Yugoslavia.) His research interests: DSS, OR/OM, Knowledge Management, BPM, and IT education. He has published over 30 journal papers, authored and co-authored 18 textbooks (in Serbian and English languages), published 15 chapters in various IS/IT books, and presented over 40 refereed conference papers. He is visiting professor, he participated in research projects and consulting activities in many countries. David T. Croasdell is an Associate Professor of Management Information Systems in the Accounting and Computer Information Systems Department at the University of Nevada, Reno. He has a Bachelor of Science degree in Zoology, a Master of Science degree in Business Computing Science and a Doctorate of Philosophy in Management Information Systems. Dr. Croasdell’s research interests are in Distributed Knowledge Systems, Knowledge Networks, Knowledge Management, Organizational Memory, and Inquiring Organizations. He has over 40 publications in a wide variety of outlets. Before embarking on his academic career, Dr. Croasdell worked at Los Alamos National Laboratory where he managed a computer based training laboratory and supervised computer assisted software engineering efforts across multiple local area networks. 
While at Los Alamos he held a number of positions. Among his posts were two positions in the Environment and Earth Sciences Division where he developed training programs for safety and environmental protection.

Xiaodong Deng is an Associate Professor
of Management Information Systems at Oakland University. He received his PhD in Manufacturing Management and Engineering from The University of Toledo. His research has appeared in Journal of Management Information Systems, Decision Sciences, Information and Management, Information Resources Management Journal, and Journal of Intelligent Manufacturing. His research interests are in post-implementation information technology learning, information systems benchmarking, and information technology acceptance and diffusion. William J. Doll is a Professor of MIS and Strategic Management at the University of Toledo. Dr. Doll holds a doctoral degree in Business Administration from Kent State University. He has published extensively on information system and manufacturing issues in academic and professional journals including Management Science, Journal of Management Information Systems, Communications of the ACM, MIS Quarterly, Academy of Management Journal, Decision Sciences, Journal of Operations Management, Information Systems Research, Omega, and Information & Management. Natalie Egger is project assistant at the Austrian Federal Ministry of Finance. Her main activities are in the field of e-Procurement and she is responsible for the project administration of the EU-project PEPPOL (Pan European Public Procurement Online). She also deals with cross-organizational processes and e-Government topics. Cid Gonçalves Filho is a professor in the Master in Business Administration of Fumec University, Minas Gerais, Brazil. He has a PhD in Administration from the Federal University of Minas Gerais, Brazil. He was visiting Professor at Massachusetts Institute of Technology (MIT), USA. His master degree was in Information Science (UFMG) and his graduate area was Electric Engineering (UFMG). He is the chief editor of Revista FACES, an English-Portuguese academic journal dedicated to management studies. His main research interests are marketing, product development, knowledge management, CRM and innovation. Kerstin Fink is University Professor for Information Systems and dean of studies at the University of Innsbruck - School of Management. Kerstin Fink conducts research in the field of knowledge management and measurement with special focus on small and medium-sized enterprises. She was visiting researcher at Stanford University and the University of New Orleans and is currently guest professor at the University of Linz. Kerstin Fink was awarded with the Tyrolean Chamber of Commerce Prize, the Otto-Beisheim Prize and the Innsbruck Scientific Award for excellent research in the field of Knowledge Management. Ronald Freeze is an Assistant Professor of Information Systems in the Accounting and Information Systems Department at Emporia State University in Kansas. He received his Ph.D. from Arizona State University. His current research interests include Knowledge Management, Capability assessment and SEM modeling. Ron’s emphasis in his research is the measurement and validated contribution of knowledge to organizational performance. Ron teaches Object Oriented Programming, Micro-Computing Applications, Software Analysis & Design and Business Computing at the undergraduate level. His publications have appeared in the Journal of Management Information Systems and Journal of Knowledge Management. Ron has also presented and had proceedings published from the ACIS, ECIS, AMCIS, ICIS and HICSS international conferences.
Doris Ipsmiller is CEO of m2n – consulting and development gmbh. She founded the company in 1999 while being staff member of the Johannes Kepler University in Linz. Apart from widespread project experience, primarily in the public and industry sector, Doris Ipsmiller has lectured on topics of knowledge management and knowledge organisation in academic institutions like the Johannes Kepler University Linz and the University of Applied Sciences, Berlin. She has held speeches at various events and conferences, has co-organized special tracks and tutorials at diverse conferences and has published titles on topics concerning agile business process development, applied ontology management and ontology based application development. George Leal Jamil is a professor in the Master in Business Administration of Fumec University, Minas Gerais, Brazil. He has a PhD in Information Science from the Federal University of Minas Gerais (UFMG). His master degree was in Computer Science (UFMG) and his graduate area was Electric Engineering (UFMG). He wrote 13 books in the information technology and strategic management areas. Yearly, he manages the doctoral consortium of the International Conference on Information Systems and Technology Management at the University of Sao Paulo (USP). His main research interests are information systems management, strategy, knowledge management, software engineering, marketing and IT adoption in business contexts. Shivraj Kanungo’s research focuses on evaluating and assessing IT value in organizations, software process improvement, and the relationship between organizational culture and IT value. He is presently Associate Professor of Management Science at the George Washington University. Previously, he held the Dalmia Chair in Management of Information Technology at the Indian Institute of Technology at Delhi. He consults extensively with industry and has published his research in leading journals. His books include CMMI Implementation: Embarking on High Maturity Practices (Tata-McGrawHill, with A. Goyal), Making Information Technology Work (Sage), Computer and Network Technologies and Applications, (Tata-McGrawHill, with B. N. Jain) and Information Technology at Work: A Collection of Managerial Experiences (HPC). Journals that have published his research include System Dynamics Review, Decision Support Systems, European Journal of Information Systems, Strategic Information Systems, International Journal of Information Management, International Journal of Information Systems, Software Process: Improvement and Practice, Information Technology and People, Computers in Human Behavior, Systems Research and Behavioral Science, and International Journal of HumanComputer Interaction. Dr. Kanungo earned his integrated bachelor’s and master’s degree (Master of Management Studies, 1986) from Birla Institute of Technology and Science, Pilani, India, the M.S. degree (1988) in Management Information Systems from Southern Illinois University at Edwardsville, IL and the Ph.D. degree (1993) in Information and Decision Systems from The George Washington University, Washington DC. Hannu Kivijärvi is a professor in Information Systems Science at the Helsinki School of Economics. He received his PhD in Management Science. His research interests include knowledge management, business – IT alignment, decision support systems in financial, production and marketing planning, IT Governance, and investments in information systems. 
His publications have appeared in a number of journals, including European Journal of Information Systems, European Journal of Operational Research, Journal of Decision Systems, Decision Support Systems, Managerial and Decision Economics, International Journal of Production Economics, and Interfaces.
Uday Kulkarni is an Associate Professor of Information Systems at the W. P. Carey School of Business at Arizona State University. He received his Ph.D. from the University of Wisconsin-Milwaukee. Professor Kulkarni teaches graduate courses in Business Intelligence and Business IT Strategy and has received several teaching awards. His research interests lie in the area of knowledge management – metrics development and assessment, decision-making support using data and knowledge based systems, and application of knowledge based/AI techniques to business processes. His research has appeared in journals such as IEEE Transactions on Knowledge and Data Engineering, IEEE Transactions on Software Engineering, Decision Support Systems, Decision Sciences, European Journal of Operations Research, and Journal of Management Information Systems.

José Emilio Navas López is Professor of Business Administration at University Complutense de Madrid, Spain. He is author and co-author of several books and papers on Technology Management, Strategy, the Resource-Based View, Intellectual Capital and Knowledge Management. He held the first Knowledge Management Chair in Spain at I.U. Euroforum Escorial.

Josef Makolm is head of IT-Audit in the Directorate General for Information Technology at the Austrian Federal Ministry of Finance. He has over 30 years of experience in research, consulting and managing projects. His main activities and responsibilities are topics in e-Government, e-Taxation, e-Participation, e-Procurement, Knowledge Management, Interoperability and Multiple Use. He has published articles and books on these topics. He is a member of the board of the Austrian Computer Society, co-leader of the Forum e-Government, head of workgroups in the Forum e-Government and the Austrian BLSG-Cooperation, and lecturer at Danube University Krems. At present, he is project manager of the research and of the awarded use case project DYONIPOS (Dynamic Ontology Based Integrated Process Optimization) and the program leader of the Austrian part of the EU-project PEPPOL. He is responsible for Workpackage 2, "Virtual Company Dossier", a project for the borderless collection of business certificates and attestations.

Shahnawaz Muhammed is Director of the Innovative Learning Center and Assistant Professor at the School of Business, The American University of the Middle East, Kuwait. Dr. Muhammed teaches information systems and operations management courses in his current position. His research interests include knowledge management, knowledge representation, information systems for knowledge management and knowledge management in supply chains. He has previously taught at Fayetteville State University, USA. He holds a B.Tech. in Mechanical Engineering from the University of Calicut, India, and a PhD in Manufacturing Management from The University of Toledo, USA. His prior work experience includes engineering design, software development and software testing. He is a Certified Supply Chain Professional (CSCP) by APICS.

Kevin J. O'Sullivan is an Assistant Professor of Management at New York Institute of Technology. He has over 16 years of IT experience in multinational firms and consulting, both in the private and public sector, in American, Middle Eastern, European and Far Eastern cultures. Dr. O'Sullivan has delivered professional seminars to global Fortune 100 organizations on subjects such as global collaboration, knowledge management, information security and multinational information systems.
His research and development interests include knowledge management, intellectual capital security and information visualization. He serves on the editorial board of the Journal of Information
and Knowledge Management and as associate editor of VINE: The Journal of Information and Knowledge Management Systems. Lorne Olfman is Dean of the School of Information Systems and Technology at Claremont Graduate University, Fletcher Jones Chair in Technology Management, and Co-Director (with Terry Ryan) of the Social Learning Software Lab (SL2). His research interests are in designing effective collaboration, learning and knowledge management technologies. To this end, Lorne and his SL2 colleagues are conducting research on a variety of topics including the design of an intelligent online discussion board, the development of an integrated set of tools to facilitate “The Claremont Conversation for the 21st Century, and the design of a virtual dialogue system. Lorne has been integrating the use of wiki technology into his research and teaching for the past couple of years. Alexander Orth works as a consultant with Accenture focusing on the Financial Services industry. His areas of expertise cover – among others – Business & IT Strategy, IT Transformation, Post-Merger Integration, Organizational and Human Performance, Performance Measurement as well as IS & IT Success. He holds a Master degree in Business Administration from European Business School (EBS), Wiesbaden/Oestrich-Winkel, Germany. Mr. Orth has been working within the research field of Knowledge Management throughout his studies; he has particularly focused on Knowledge Management Success and Success Measurement. His work has been published in the Proceedings of the Annual Hawaii International Conference on System Sciences 2009. Elsa Rhoads, D.Sc., is the Knowledge and Performance Architect for the Pension Benefit Guaranty Corporation (PBGC), a civilian federal agency in Washington, DC. Prior to her appointment in 2000, Rhoads was a Branch Chief in the Information Resources Management Department. Rhoads is a Board member of the Knowledge Management Working Group of the Federal CIO Council. Before joining PBGC in 1993, Rhoads enjoyed a career in management and information technology consulting in Chicago. She was also the founder and president of Rhoads Group, an IT consulting and software development organization. Rhoads holds an M.P.A. degree, and served as an adjunct professor in the Public Administration program at Roosevelt University, Chicago. Vincent M. Ribière, After teaching for the past 10 years at American University (Washington, DC) and later on at the New York Institute of Technology (NYIT) in New York and in the Kingdom of Bahrain, Vincent is now the Managing Director of the South Asian branch of the Institute for Knowledge and Innovation (IKI) of Thailand hosted by Bangkok University (http://iki.bu.ac.th) as well as an Assistant Professor at the Graduate School of Bangkok University. Vincent received his Doctorate of Science in Knowledge Management from the George Washington University, and a PhD in Management Sciences from the Paul Cézanne University, in Aix en Provence, France. Vincent teaches, conducts research and consults in the area of knowledge management, innovation management and information systems. Over the past years, he presented various research papers at different international conferences on knowledge management, organizational culture, information systems and quality as well as publishing in various refereed journals and books. 
Pedro López Sáez is Assistant Professor of Business Administration at University Complutense de Madrid, Spain, and he was a Fellow at Real Colegio Complutense-Harvard University from 2004 to
2005. Professor López Sáez is also a Research Associate at the CIC Spanish Knowledge Society Research Centre and he is author and co-author of several papers and books on the Resource-Based View, Intellectual Capital and Knowledge Management.

Michael Stankosky, DSc, is Professor of Systems Engineering, Lead Professor of Knowledge Management, and co-founder and co-director of the Institute of Knowledge and Innovation at the George Washington University. In those capacities, he oversees the research and education of all academic activities relating to Knowledge Management and Innovation. He collaborates with 12 adjunct faculty, 25 doctoral researchers, as well as with numerous scholars and practitioners from around the world. His latest book, Creating the Discipline of Knowledge Management, summarizes some of these efforts.

Miriam Delgado Verde is Assistant Professor of Business Administration at University Complutense de Madrid, Spain. She was a Fellow at Manchester Institute of Innovation Research (University of Manchester) from 2008 to 2009. She is author and co-author of several papers on the Resource-Based View, Intellectual Capital and Technological Innovation.

Silke Weiß is project assistant at the Austrian Federal Ministry of Finance. She is responsible for the analysis of information, communication and transaction processes as well as participative stakeholder integration on the basis of web 2.0 technologies for the development of new e-Government structures. She is assistant leader of the workgroup "organization" in the Forum e-Government of the Austrian Computer Society.

Suzanne Zyngier, PhD, is a Senior Lecturer and the Executive Director of the Masters of Business Information Management & Systems in the School of Business, Latrobe University, Australia. She has held academic appointments at Monash University and Swinburne University of Technology. Her research centers on the governance of knowledge management strategies. This has resulted in the development of a KM governance framework detailing the relationship of governance to the effective implementation of KM processes, and in defining the roles and tasks involved at each point of governance: planning and development of the KM strategy and the implementation of that strategy. Prior to joining academia, her previous career was as an experienced knowledge management and information services analyst, and she conducted her own business as a consultant to the professional, corporate and not-for-profit sectors. Since 2001 her research has been concerned with knowledge management understandings and practices in the corporate and government sectors, and includes a comparative study of knowledge management status between Australian and European financial services institutions. Suzanne has published her work in "Information Systems Frontiers", the "International Journal of Knowledge Management" and in "Information and Knowledge Management". She has also published several technical reports, several book chapters and has presented papers on her research at international conferences and to industry.
Index
A
allowing for mutual influences (AI) 201
assimilative knowledge 216
authority 51, 52, 54, 55, 56, 58, 59, 60, 61, 62, 65, 68, 71, 72
B
balanced scorecard metric 153
Belardo’s matrix 154, 155, 156, 157, 159, 169
business capital 181, 187
business environment 32, 33
business processes 262, 263
business strategy 52, 53, 56
C
clarifying mutual expectations (CE) 201
codification 192, 193, 195, 196, 197, 198, 199, 200, 202, 203, 205, 206, 207
cognitive components 15
cognizant enterprise maturity model (CEMM) 129, 148
collaboration 213, 214, 215, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 233, 234, 235, 236
collaboration centered strategy 213, 215, 223
collaboration engineering (CE) 213, 215, 220, 222, 223, 224, 225, 227, 228, 229, 231, 233, 235, 237
collaboration engineers 235
collaboration processes 230, 231, 235
collaborative working culture 262
company valuation 14
competitiveness 32, 33, 48
competitor knowledge management 32, 35
conceptual knowledge 106, 107, 109, 111, 113, 114, 117, 119, 120, 121, 123
conditional knowledge 216
confirmatory factor analysis (CFA) 130, 136, 137, 138, 142, 145
content meaning 277
contextual knowledge 106, 107, 109, 111, 113, 114, 117, 120, 121, 122, 123
corporate culture 193
creativity 112, 123
critical knowledge 278
critical success factors (CSF) 2, 7, 8, 9, 10, 35, 108, 109, 153, 155, 164, 169, 170, 243, 244, 246, 248, 249, 250, 251, 252, 255, 257
cross-validation 151, 152, 160, 161, 162, 164, 168
cross-validation nomological network 161
cultural knowledge 216
culture 154, 155, 160, 262, 263, 264, 265, 266, 268, 270
customer knowledge management 32, 35
D
data 130, 132, 134, 137, 138, 139, 141, 142, 144, 149, 262, 263, 265, 266, 267, 268
declarative knowledge 216
DeLone and McLean (D&M) IS success model 3, 17
descriptive knowledge 216
DYONIPOS (Dynamic Ontology based Integrated Process Optimisation) research project 277, 279, 280, 281, 282, 283, 284, 285, 286, 287
E
economic value added metric 153
ELAK (electronic records) 284, 285
emergent strategy 53, 55
evaluation 55, 68
expertise 130, 132, 133, 136, 138, 140, 142, 143, 146, 147, 149
exploitation 33
exploration 33
F
facilitators 230, 231, 234, 235
firm performance measures 5
G
governance 51, 53, 54, 55, 58, 60, 62, 63, 65, 66, 67, 68, 72
group support systems (GSS) 214, 215, 221, 222, 223, 224, 225, 227, 228, 229, 230, 231, 235
H
Hawaii International Conference on System Sciences (HICSS 2006) 1, 2, 14
Hawaii International Conference on System Sciences (HICSS 2007) 1, 12, 13
Hawaii International Conference on System Sciences (HICSS 2008) 1
HICSS knowledge management foundations workshop 14
holistic KM view 238, 257
human capital 130, 131, 132, 133, 134, 135, 146, 179, 180, 181, 182, 184, 186, 188
I
index values (IV) 242
individual innovativeness 106, 110, 112, 113, 114, 115, 120, 123
individual knowledge 107, 109, 110, 113, 121, 216
individual performance 107, 108, 112, 122
information quality 17, 18
infrastructure 264, 265, 267, 269, 270
innovation 32, 33, 34, 35, 38, 39, 40, 41, 42, 44, 45, 46, 47, 48, 106, 107, 109, 110, 112, 113, 116, 117, 120, 121, 123, 124, 125
intangible asset monitor metric 153
intangible assets 128, 129, 131
integrated KMS architecture 257
integrated KMS perspective 257
integrated measurement system 257
intellectual assets 58, 70
intellectual capital 91, 93, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 216
intellectual capital-based view (ICV) 179, 180, 187, 189
intellectual capital, resource-based view (RBV) of 179, 180, 181
Intellectus Model 181, 182
International Journal of Knowledge Management (IJKM) 4, 16, 30
interpretive structural modeling (ISM) 264, 266, 267, 268, 271, 274, 275
IT effectiveness 263
J
Jennex and Olfman KM Success Model 3, 11
Jennex and Olfman's success assessment framework 239, 245, 246, 248, 249, 251
Journal of Management Information Systems (JMIS) 192
K
Kaiser-Meyer-Olkin (KMO) index 184, 185
key performance indicators (KPI) 242, 243, 247, 249, 250, 251, 252, 255
KM academics 1, 2, 4, 5, 9, 10
KM capability 128, 129, 130, 134, 135, 146
KM capability improvement 128
KM, enablers of 264, 265, 266
KM governance 52, 53, 54, 55, 56, 57, 61, 64
KM initiatives 238, 239, 240, 243, 244, 245, 246, 252, 254, 255, 261, 262, 264, 268, 269, 270
KM initiative success 192, 194, 196, 197, 198, 199, 200, 201, 202, 205, 206, 207, 208, 212
KM/KMS success 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
KM/KMS success measurement 2
KM leadership 51
KM leadership through governance 51, 52, 53, 54, 55, 56, 57, 60, 61, 62, 64, 65, 67, 68
KM leadership through management 51
KM maturity models 56, 64
KM practitioners 1, 4, 5, 51, 57, 58
KM processes 238, 240, 241, 242, 243, 251, 252, 253, 254, 261
KM strategy 52, 53, 54, 55, 56, 58, 59, 61, 62, 64, 68, 69, 71, 72, 213, 215, 223, 224, 225, 226, 230, 238, 240, 241, 242, 253, 254, 255, 261
KM students 1, 4, 5
KM success 51, 52, 61, 107, 108, 109, 110, 112, 113, 120, 121, 122, 123
KM Success and Measurement minitracks 1
KM success as process measure 3
KM systems (KMS) 239, 240, 241, 242, 243, 244, 245, 246, 248, 249, 250, 251, 252, 253, 254, 255, 257, 261
KM technology 238
know-how 216
knowledge 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 213, 215, 216, 217, 219, 225, 229, 230, 231, 232, 233, 234, 235, 236, 262, 263, 266, 271, 272, 273
knowledge access 109
knowledge acquisition 278, 279
knowledge application 109, 278
knowledge assets 128, 129, 130, 131, 132, 137, 145, 146, 147, 149
knowledge audits 91
knowledge capabilities (KC) 128, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 142, 144, 145, 146, 147
knowledge capacity 216
knowledge capture 109
knowledge community 195
knowledge content 108, 109, 110, 122
knowledge creation 109
knowledge development 278
knowledge distribution 278
knowledge documents 130, 132, 133, 137, 138, 139, 142, 144, 146
knowledge evaluation 278
knowledge, explicit 15, 16, 52, 53, 65, 68, 72, 129, 131, 132, 133, 134, 181, 213, 215, 216, 217, 219, 263, 265, 277, 278
knowledge flows 129, 131
knowledge hierarchies 195, 209
knowledge identification 278
knowledge, implicit 277, 278, 282
knowledge integration 265, 266, 272
knowledge-intensive organizations 214
knowledge landscape 278
knowledge lifecycle 129, 130, 131, 132, 133, 134
knowledge management cycle 193
Knowledge Management Foundations workshop 1
knowledge management index (KMI) 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 165, 166, 167, 168, 169, 170, 176, 177
knowledge management (KM) 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 32, 33, 34, 35, 37, 38, 39, 40, 41, 42, 44, 45, 46, 47, 50, 91, 93, 94, 98, 100, 101, 104, 128, 130, 131, 136, 147, 148, 149, 192, 193, 195, 208, 209, 210, 211, 212, 214, 215, 216, 217, 218, 219, 223, 229, 230, 233, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 277, 278, 279, 282, 286
knowledge management (KM): direct intellectual capital (DIC) methods 93
knowledge management (KM): knowledge potential measurement method 91, 93, 94, 98, 100, 101, 104
knowledge management (KM): market capitalization methods (MCM) 93
knowledge management (KM): return on assets (ROA) methods 93
knowledge management (KM): scorecard (SC) methods 93
knowledge management (KM) success 106, 107, 108, 109, 110, 112, 113, 120, 121, 122, 123, 124, 125, 126
knowledge management (KM) success models 14, 15, 17, 18, 19, 24, 28, 29, 30
knowledge management (KM) system success 238, 239, 244, 248, 255, 257
knowledge management performance index (KMPI) 153, 173
knowledge management processes (KMP) 153, 154, 155, 156, 157, 159, 160, 161, 168, 169, 170
knowledge management process model 277, 278
knowledge management success measures 106, 122
knowledge management systems (KMS) 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 14, 15, 16, 17, 18, 20, 22, 23, 25, 26, 27, 107, 108, 109, 122, 126, 193, 195, 205, 208, 264, 265, 271
knowledge markets 195, 209
knowledge mass 98, 104
knowledge measurement 91, 92, 93
knowledge measurement systems 91, 94
knowledge objects 131
knowledge organizations 91, 92
knowledge position 98, 100, 104
knowledge potential 91, 93, 94, 97, 98, 99, 100, 101, 102, 104
knowledge potential framework 91
knowledge quality dimension 18, 20
knowledge repositories 131
knowledge repository success (KRS) 197
knowledge sharing 109, 129, 134, 146, 147, 192, 193, 194, 195, 197, 198, 205, 206, 207, 210, 211, 219, 235
knowledge sharing intention 265
knowledge stocks 179, 180
knowledge storage 278
knowledge strategy 2
knowledge submission process 21
knowledge, tacit 15, 16, 21, 52, 53, 72, 91, 94, 132, 133, 134, 181, 193, 196, 198, 206, 213, 214, 215, 216, 217, 219, 224, 234, 236, 263, 265
knowledge transfer initiatives 214
knowledge velocity 91, 100, 104
knowledge workers 91, 92, 93, 94, 96, 97, 98, 99, 100, 101, 102, 104, 105, 277, 278, 279, 280, 281, 283, 284, 285, 286
know-what 216
know-when 216
know-who 216
know-why 216
KOMPASS database 285
L
latent descriptor factors 130, 134
leadership culture 8
learning orientation 265
lessons learned 130, 132, 133, 135, 136, 138, 139, 142, 143, 144, 145, 146
linguistic knowledge 216
M
management support 21, 22, 26, 27
market intelligence 34, 47
market knowledge 32, 33, 34, 35, 42, 47
market knowledge management 32, 34, 47
market knowledge models 32
market orientation 33, 34, 35, 37, 38, 39, 40, 41, 42, 46, 47, 48, 49
market orientation, behavioral perspective 34
market orientation, cultural perspective 34
market share 33, 34
market value added metric 153
measurement 53, 55, 66, 68, 264, 265, 270
meeting expectations (ME) 201
N
new product performance 32, 38
O
operational knowledge 106, 107, 109, 111, 112, 113, 114, 115, 116, 117, 120, 121, 122, 123, 125
organizational culture 184, 192, 193, 194, 195, 207, 238
organizational knowledge 33, 38, 48, 91, 94, 104, 107, 216, 217
organizational learning (OL) 132
organizational memory 33
organizational memory information systems (OMS) 16, 17
organizational memory (OM) 15, 19, 23, 24
organizational performance 32, 34, 37, 47
organizational performance (OP) 150, 151, 152, 153, 155, 157, 158, 161, 162, 168, 169, 170, 178
organizational trust 192
organizational trust survey (OTS) 201, 202
outsourcing 266
P
partial least squares (PLS) 161, 162, 165, 166, 169, 171, 172, 173
performance 106, 107, 108, 109, 110, 112, 113, 115, 116, 117, 118, 120, 121, 122, 123, 125, 127
personalization 192, 195, 196, 197, 198, 199, 200, 202, 203, 205, 206, 207
presentation knowledge 216
procedural knowledge 216
profitability 33, 49
Q
quantum mechanical thinking 91, 93, 94, 95
quantum organizations 91, 94, 95, 96, 103, 105
quantum-relativistic paradigm 94, 95
R
reasoning knowledge 216
reducing controls (RC) 201
relational capital 179, 180, 181, 182, 184, 185, 186, 187, 188
resource-based view of the firm 216
resource description framework (RDF) 281, 282
return on investment (ROI) 33, 51, 56, 68, 69
Riempp’s integrated KMS architecture 239, 240, 241, 249, 250, 251, 252, 253, 255
Riempp’s performance measurement system 239
risk management 55, 65, 68, 69
S
sales 33, 34
self-transcending knowledge 216
semantic linkage 277
service quality dimension 17, 21
sharing relevant information (SI) 201
Skandia navigator metric 153
small and medium-sized enterprises (SME) 91, 101, 102, 103, 105
social capital 181, 195, 197, 207, 208, 265, 272
social context 265, 266
social network analysis (SNA) 7, 8
strategic alliances 185, 187
strategic resources 179
structural capital 180, 181, 182, 184, 185, 186, 187, 188
structural equation modeling (SEM) 32, 49, 106, 115, 150, 153, 159, 160, 161, 162, 163, 164, 165, 168, 169
T
tacit embodied knowledge 216
task knowledge 106, 107, 109, 110, 111, 112, 113, 114, 116, 118, 120, 121, 122, 123
taxonomies 131
technical components 15, 16
technology 262, 263, 264, 265, 269, 270
technology broker metric 153
technology knowledge management 32
ThinkLet technique 213, 230, 231, 233, 236
trustworthiness 192, 194, 195, 197, 198, 200, 201, 202, 205, 206, 207, 208, 209, 210, 211, 212
V
virtual project teams 266
W
Web ontology language (OWL) 281