The Emerald Research Register for this journal is available at www.emeraldinsight.com/researchregister
The current issue and full text archive of this journal is available at www.emeraldinsight.com/1460-1060.htm
Consumer perceived risk in successive product generations
Consumer perceived risk
Maria Sa¨a¨ksja¨rvi Swedish School of Economics and Business Administration, Helsinki, Finland, and
145
Minttu Lampinen School of Business Administration, University of Tampere, Tampere, Finland Abstract Purpose – The study examines the consumer perceived performance risk in successive product generations. Design/methodology/approach – The results are based on ten focus group interviews. We divide risk into two different levels based on its criticality (attribute and functionality) to be able to assess more than its mere presence in an innovation. Findings – The study shows performance risk to differ between generations representing different innovation levels, and that this risk is moderated by whether the consumer has usage experience of the original innovation. The results show that the risk consumers perceive is more critical in a modified successor than in an original innovation provided that consumers have usage experience of the latter one. Practical implications – This study has implications for companies aiming at reducing consumer perceived risk in innovative product launches. Originality/value – Perceived risk is an important construct in innovation adoption research. Although it has been used to measure and predict individual adoption patterns towards a single innovation, little research has examined its impact on successive product generations. The results offer both theoretical and practical implications. Keywords Innovation, Performance levels, Product management, Risk management, Consumer risk Paper type Research paper
Introduction Performance risk, “the uncertainty and adverse consequences of buying a product” (Dowling and Staelin, 1994) is an integral part of innovation adoption (Bauer, 1967). Innovations that are perceived as possessing a high degree of risk might be rejected despite their apparent benefits (Runyon and Steward, 1987). Especially in technological markets where uncertainty is high, the need to reduce performance risk and thus increase adoption intentions, is vital for firm success (Ziamou, 2002; Veryzer, 1998a, b; Glazer, 1995). Risk reduction is thus integral. However, means to achieve it might not be readily at hand for most firms. Few research efforts have been directed at comprehending the different levels of risk a product can pose (Ziamou, 2002). An exception is Ziamou’s (2002) study of performance risk in technological markets. She found that consumers find more uncertainty in innovations that do not provide consumers with new functionality, but rather change the interface of already existing functionality. Although separate products provide consumers with varying functionalities, products with new and existing functionality can also occur within the same product category. Product generations, which have been surprisingly
European Journal of Innovation Management Vol. 8 No. 2, 2005 pp. 145-156 q Emerald Group Publishing Limited 1460-1060 DOI 10.1108/14601060510594675
EJIM 8,2
146
neglected by previous research (Rogers, 1995), are sequential introductions of a product. The first generation of an innovation is called the original innovation and its predecessor the modified successor. The latter often updates the former by making it more timely and appealing without changing its core functionality. Product generations are common in technological markets where existing platforms are utilized to a number of products to cut costs and production times. It would be useful to know whether the assumptions established by traditional innovation adoption research would also exist between product generations. For example, does the fact that certain consumers have usage experience of the original innovation change their perceptions of product risk when encountering the modified successor? The purpose of this paper is to examine consumer perceived performance risk in successive product generations. It proposes the amount of risk present in an innovation to be dependent on the product generation and moderated by consumer expertise, i.e. whether consumers have experience of the original innovation or not when encountering the modified successor. This research contributes to previous literature by taking product generations into account. We also extend on the notion of risk by presenting it as consisting of two components, attribute- and functionality-related risk, which differ in their criticality. The paper is organized as follows. First, we discuss innovative product generations and examine their role in consumer perceived performance risk. Then, we examine consumer expertise and different levels of risk (attribute and functionality) present in new products. Next, we outline the methodology for the study conducted and present the results of the study. Finally, we present the conclusions and implications of our study. Innovations and product generations An innovation is an idea, product or piece of technology that has been developed and marketed to customers who experience it as being something new (Rogers, 1995, p. 45). Innovative product generations are sequential introductions of the same product in which later versions updates the predecessors by bringing new features or modifying their features to make the product more timely and appealing. The generations follow the same product concept, and consumers easily notice that the modified successor is built on the original innovation. For example, although a new laptop could provide consumers with a range of new features such as Flash card options and Wi-Fi roaming, consumers would easily recognize the laptop as a laptop. When a consumer encounters the modified successor, the original innovation is likely to be used as a comparison standard. According to categorization and analogical learning theory, consumers utilize existing knowledge to learn about new products (Gentner, 1989; Basu, 1993; Sujan, 1985; Fiske, 1982). That is, upon seeing a new product, consumers search for a schema match (Sujan, 1985). If a relevant schema is accessed, consumers can compare properties between the new product and the schema to see how it deviates from it. When differences are found, they are transferred to the existing schema as tags (Sujan and Bettman, 1987). For example, if a new mobile phone has video streaming that other mobile phones lack, this feature is added as a tag. Consumers have the possibility of aligning differences both on the basis of changes in attributes and functionality (Gentner, 1989; Gregan-Paxton and Roedder, 1997). We use attributes as a broad term, encompassing all product components consumers use
when interacting with the product to obtain a particular functionality. Attributal changes usually involve updating the product’s features (Gregan-Paxton and Roedder, 1997). For a handheld device, for example, these adjustments might involve changing the size of the screen, switching the watch on the monitor from a digital to an analog one, making the product lighter, smaller, or changing its shape. Attributes describe the different parts of the product, and knowledge about them does not necessarily tell the consumer how to use it. As such, products that only offer attributal changes are interface innovations. Functionality is the potential set of benefits a consumer receives from the product, and it occurs on a deeper level than mere changes in the product’s attributes (Markman and Gentner, 2001). It describes how different product components relate to each other to make the product function as it does and thus conveys information about product usage (Markman and Gentner, 2001). Examples include what button to press when deleting an e-mail, how to insert a calendar appointment, how to access games, or what to do to access the internet. Simply knowing that a device has internet access (attribute) does not tell the consumer how to access it (functionality). For example, numerous consumers have WAP enabled phones, but never use the services offered since they might not possess knowledge about how to access them. Products that introduce new functionality to the consumer are called functionality innovations. Innovation degree and risk Product evaluation differs based on whether the consumer knows the functionality of the product beforehand (Ziamou and Ratneshwar, 2003). If the product is a functionality innovation, consumers are likely to focus on the new functionality the product offers instead of its interface as it delivers a compelling advantage over other already existing products (Ziamou, 2002). If the changes in the product are on an attributal level (interface innovation), existing products providing a similar functionality are likely to be cued (Sujan, 1985). Further, consumers are likely to compare the features of the new interface with known and familiar interfaces that deliver the same functionality (Ziamou and Ratneshwar, 2003). Taking these results into an innovation generation context would imply that when consumers are exposed to the modified successor they are likely to compare it to a product that offers the same functionality. The best match would be its previous generation since it also matches on an attributal level (i.e. it looks similar) (literal similarity match, see e.g. Ratterman and Gentner, 1998). Thus, most consumers exposed to the modified successor would compare it to its predecessor. However, consumers who are exposed to the modified successor who are unaware of the original innovation are likely to compare it to other products with the same functionality. In this case, many suitable matches might be found. Ziamou (2002) found that when a functionality innovation is introduced, consumers perceive less uncertainty about its performance, since they focus on the unique benefits delivered by the product. In contrast, regarding an interface innovation, consumers give considerable attention to the interface that is discrepant from existing schemata. They question whether the product is likely to work as intended, thus perceiving higher uncertainty. Conversed to product generations, this would mean that consumers, in general, would perceive less uncertainty regarding the performance of the original innovation (functional innovation), and perceive more uncertainty in the
Consumer perceived risk
147
EJIM 8,2
modified successor (interface innovation). Since uncertainty is a part of risk, we expect in line with Ziamou’s predictions that: P1.
148
Consumers will perceive less performance risk regarding the original innovation than its modified successor.
Since products can have both attributal and functionality-related components, we propose that consumers can experience risk at both levels. By dividing risk into two groups (functional, more critical and attributal, less critical risk) we are able to elaborate on Ziamou’s hypotheses regarding interface and functionality innovations. Previous research shows that functional risks weigh heavier than interface related risks (Gregan-Paxton and Roedder, 1997; Goldstone et al., 1991), since the former pertains to the product delivering the promises it makes to the consumers. In contrast, attribute related risks are not as severe, as they do not prevent the product from providing consumers with the benefits it promises. When the original innovation is introduced, even tech savvy consumers are not familiar with the functionality of the product. Although consumers are likely to focus on the functionality of the original innovation, consumers that are not familiar with the functionality cannot elaborate on functionality-related risks, since these are unfamiliar to them. In contrast, attribute-related risks are easy to elaborate on since they are evoked upon seeing the product. Hence, P2.
Consumers emphasize attributal rather than functionality related risks regarding the original innovation.
Consumer expertise and risk When the second generation of the innovation is introduced, two distinct groups of consumers can be distinguished: those that have usage experience of the first innovation (product experts) and those who do not. For consumers that have usage experience the innovation represents merely an attributal change to the original innovation and they can readily compare the two. As such, for them it is an interface innovation. For novices that do not have usage experience of the original innovation, however, it represents a functionality innovation. Hence, according to Ziamou’s predictions, they should perceive less uncertainty about its performance than experts to whom it represents an interface innovation. However, if we take into account risk criticality, we obtain a different picture. As stated in P2, consumers that are not familiar with the functionality cannot elaborate on functionality-related risks since they are unfamiliar to them. As such, only experts are able to elaborate on functionality-related risks. Moreover, research on analogical learning shows that only experts know sufficiently about a product to be able to determine functionality-related risks (Novick, 1988; Novick and Holyoak, 1991). Novices often only realize attributal changes to a product since they are readily visible (Novick, 1988), and are not able to elaborate on whether the product functions as it should since they have yet to establish usage experiences with it. Alternative support for the notion that experts perceive the risk in the modified successor to be more critical can be found by examining experienced consumers’
reactions to new technology. When the original innovation that provides innovative functionality is first released, consumers that take an interest in technology are likely to be impressed (Moore, 1999). This “wow” effect might overshadow some of the flaws and consumers have learned not to expect truly new technological innovations to be perfect. The life cycle for technological products is short, and firms are under constant pressure to release them to the market as soon as possible. Over time, however, consumers might come to initiate a long list of possible product improvements, since they have hands-on experience of the product. When the modified successor is introduced, consumers that have experience of the original innovation compare it to the modified successor, and possible improvements now come to mind. If the modified successor does not accommodate them, it might also contribute to increased risk regarding the product’s performance. A point worth noting is that we do not claim the amount of risk itself to differ among experts and novices, only its criticality. Risk criticality, however, is imperative for consumer adoption decisions; it matters more if the product is failing to deliver one of its functions than if it has an attributal flaw. The amount of risk in numerical terms should be in line with Ziamou’s predictions (i.e. exceed that of the original innovation), which were already brought forward in P1. Hence, regarding the relation between expertise and product generation we propose the following: P3.
Experts and novices find similar amounts of performance risk in the modified successor, but
(1) experts emphasize functionality related risks, and (2) Novices emphasize attributal risks.
Methodology The propositions brought forward by this study were examined in two pre-market studies. A pre-market study entails showing an innovation to a selected group of potential customers before its commercial launch. Thus, the participants are exposed to the innovation for the first time. At the time of the study, the product was in its final stages of development. The product category of communicators was selected for this research, because the category offered a change to examine two innovative product generations. The Nokia 9000 Communicator was launched in 1996 in Europe. It was the first handheld device to combine a mobile phone and a personal digital assistant into one device with a full keypad and was thus a new functionality innovation. It had no antecedents and got the innovation of the year award 1996. The modified successor was the Nokia 9110 Communicator. It was launched in 1998 in Europe. Nokia 9110 Communicator was the first one in the industry to enable users to send and receive pictures via infrared. The Nokia 9110 Communicator is based on innovative modifications rather than on real innovative capabilities, when compared to the Nokia 9000, and can thus be considered an interface innovation. The research for Nokia 9000 was conducted in the year 1995 and the research for Nokia 9110 in the year 1998.
Consumer perceived risk
149
EJIM 8,2
150
Table I. Interview profile for the Nokia 9000 study
Table II. Interview profile for the Nokia 9110 study
A focus group interview was chosen as a suitable approach to study consumer evaluation of the communicators, since high-tech products’ status and observability often contribute to the fact that the consumer’s social environment will affect his/her decisions (Rogers, 1995). There were five group discussions conducted in two countries, in the UK and in Germany. These countries were selected, because the sales of the communicator first started in them. Since the groups and their results were similar in both countries, we do not aim to study the differences between them. Each focus group session lasted about two hours, and was video-recorded. The interviewer’s role was that of a moderator: to ensure that the main topics in the discussion guide were covered and to facilitate the discussion. The participants were encouraged to use their own words and freely state their opinions. Consumers with the same amount of product usage are likely to have the possibility to share their experiences within a group; they can challenge each other and point out contradictions in expressed views even if they do not know each other a priori (Farquhar and Das, 1999). The profile of the focus group participants of the Nokia 9000 study is shown in Table I. This profile represented that of a potential communicator user and was gained from Nokia for research purposes. The participants could be defined as early adopters of products. All the participants were between the ages of 25 and 45. They consisted of males and females, whose planned usage for the new product was either business or private. They either owned or intended to get a mobile phone or organizer in the next 12 months. Almost all of the interviewees used an organizer, and nearly all respondents had access to a PC either at home or at work. The group size varied from 7 to 10 participants in each group, which is the recommended size (McDonagh-Philp and Bruseberg, 2000; Morgan, 1997). The Nokia 9110 focus group participants’ profiles are displayed in Table II. Groups varied in size from 8 to 12 participants in each group. All the respondents had access to a PC either at home or at work. The existing users of Nokia 9000 were recruited from the Club Nokia database. Since communicators are only produced by Nokia, the non-users of the communicator cannot yet have usage experience of this kind of product.
Group Group Group Group Group
Group Group Group Group Group
1 2 3 4 5
1 2 3 4 5
Cellphone
Organizer
Sex
Own Own Intend 50 percent own 50 percent own
Own Intend Own 50 percent own 50 percent own
Male Male Male Female Mixed
Cellphone
Organizer
Sex
Own Own Intend 50 percent own Own
50 percent own 50 percent own Own 50 percent own Nokia 9000
Male Female Male Male Male
Results We analyzed the transcribed data in two stages. First, all statements relating to risk were extracted. Then, those relating to performance risk were separated into attribute and functionality related risks. We followed the definitions of attributes and relations when coding the different risk categories (Gregan-Paxton and Roedder, 1997). More specifically, a separate product component that was perceived as risky by participants was considered an attributal risk, whereas a statement expressing how different product components related to each other and created risk was coded as a functionality related aspect. One author coded the complete data into separate categories, and the other author double-checked the first author’s coding. Disagreements were resolved by discussion. The results of the study are reported in the following manner. First, we discuss the original innovation (Nokia 9000). Then, we examine the results regarding the modified successor (Nokia 9110) first regarding experts and then regarding novices. Regarding each innovation, the results regarding attributal and functionality related risks are shown in a table with illustrative quotes to show the reader the respondents’ exact wording. Finally, we examine the suggested propositions.
Consumer perceived risk
151
The original innovation: Nokia 9000 communicator A summary of consumer perceived interface and functionality risk regarding the original innovation (the Nokia 9000) is displayed in Table III. As can be seen from the table, two interface issues relating to the Nokia 9000 Communicator were the size of the keys and their labeling on the keyboard. The buttons of the communicator were considered to be too small. Respondents also thought that some of the features of the communicator could have been left out, such as e-mail and SMS. Consumers also felt that the screen on the communicator was small. The functionality risks consumers perceived with the Nokia 9000 concerned Windows compatibility and the battery. The lack of Windows operating system compatibility was perceived as a risk, since many respondents used Windows on a daily basis. Respondents also needed reassurance regarding what will happen if the battery drains out. Other information of interest to them was multi-tasking; consumers needed reassurance that documents can be viewed during phone calls and that a hands-free option is available while driving. Illustrative quotes Attribute Size of keyboard E-Mail/SMS Screen Functionality Windows compatibility Battery
“You cannot type properly”. “No alternative to a laptop”. “No Real Use”. “Can do it on a fax”. “I’d prefer to use the PC”. “The screen size is abit small for internet usage”. “The missing Windows compatibility would make me stay away from that product”. “If the batteries are finished everything would be lost”.
Table III. Nokia 9000: perceived performance risk
EJIM 8,2
152
The modified successor: Nokia 9110 communicator Experts. The performance risks expressed by consumers that had usage experience of the original innovation (experts) are summarized in Table IV. In general, experts raised few interface issues. They were not too concerned with the size of the keyboard; it was seen as sufficient given the actual amount of typing they did. They had defined their own usage occasions and perceived a trade-off between the size of the keyboard and the benefit of having a compact, all-in-one communications tool. As can be seen in Table IV, some users felt that the size of the keyboard was not an improvement compared to the original communicator. Experts, however, raised many functionality-related concerns (see Table IV). They felt that the product was difficult to use, and that it was lacking a consistent interface; you had to go though many steps to complete a whole function. They also felt that the fax functionality should be more elaborate, since you cannot print faxes from the communicator, and if you want to make sure that the fax was really sent you have to go through a number of unnecessary steps and the communicator will try to re-send it. The loading port was also perceived as difficult. They also felt that the battery should last longer, and the device should have more memory considering all the functions it contains. Novices. A summary of the performance risks expressed by consumers with no usage experience of the original innovation (novices) is shown in Table V. Consumers that did not have experience of the first generation (novices) raised many interface concerns regarding the Nokia 9110 communicator (Table V). They
Illustrative quotes Attribute Size of keyboard Manual Functionality Usage
Fax
Printing Loading port Table IV. Nokia 9110: perceived performance risk among experts
Battery Memory
“I prefer the existing (9000) front keypad”. “I can’t understand why the keys are so small with such a large space below”. “The manual could be better”. “To get from one menu to another you have to press ‘abort’ several times and press another button-you can’t answer directly without any problems. For e-mail you think is sent you have to go to the exit basket file and press buttons there, it might abort again, you have to click away another error message and restart it – it takes far too long”. “If you want to make sure that the fax was really sent you have to go through a number of unnecessary steps and the communicator will try to re-send it” “You cannot pront faxes from the communicator” “The existing loading port is a disaster. . . it’s never with you need it but it is too sensitive to leave on the machine. On a good phone it’s built in”. “The battery should last longer”. “It does not have enough memory considering all of the function it performs”.
Illustrative quotes Attribute Fashionability Forms of communication Size of keyboard Weight Screen Functionality Input format Usage
“If I would be looking at phones I’d think that’s the old one”. “It offers an excessive amount of communication”. “I doubt I would have use for SMS”. “If you would get the pen-facility on that . . . I think it would be easier to type on the thing.” “It is too heavy” “The screen is small”. “The smaller the screen gets the worse it gets”. “I would not be using it for a long time . . . 10 minutes, 20 minutes”. “I would try to use my PC and use that as a back-system of a PC”.
thought it was heavy, that the screen was small, especially if you wanted to look at spreadsheets or perform more demanding functions. They also felt that it might offer an excessive amount of communication, and doubted that they would have use for SMS. Many participants (especially women) also commented on the look of the phone; they felt it looked old fashioned. The size of the keyboard was also a real concern, as can be seen in Table V. Respondents felt that it was too small, and that you could not type properly with it. The size of the keyboard was not merely an attributal concern. Respondents felt that it would affect the functionality of the device. They also had concerns over whether the Nokia 9100 would actually function properly both as a mobile phone and a computer. Based on the above-presented results, we can now examine our propositions. P1 suggested that consumers will feel less risk regarding the original innovation than its modified successor. In general, both experts and novices raised more issues related to performance risk regarding the modified successor than the original innovation (compare Tables V and IV to Table III); this proposition is hence supported. P2 put forward that consumers would emphasize attributal rather than functionality related risks regarding the original innovation. Overall concerns regarding the original innovation’s performance were few. They dealt with the size of the keyboard and the screen, and whether consumers would in fact need both SMS and e-mail on the device (Table III). Consumers were also worried that the functionality of the product would be hampered by a lack of Windows compatibility, and were concerned about its battery life, since the whole product is rendered obsolete if the battery does not function properly. As such, the main concerns regarding the original innovation were attributal. P3 advocated that while experts and novices perceive similar amounts of risk regarding the performance of the modified successor, experts would emphasize functionality-related risk, whereas novices would focus on attributal risks. Overall, without regards to the criticality of the risk involved, experts and novices perceived similar amounts of performance risk in the modified successor. If risk criticality is taken into account, however, the situation changes. Experts clearly compared the
Consumer perceived risk
153
Table V. Nokia 9110: perceived performance risk among novices
EJIM 8,2
154
modified successor (the Nokia 9110 communicator) to the original innovation (the Nokia 9000 communicator), of which they had usage experience. They also seemed to transfer negative beliefs that they had about the original innovation to the modified successor, especially if they found that improvements that would have made a positive contribution to the product’s performance had not been taken into consideration. They felt that most product attributes were fine, but pointed out several functionality related concerns such as the difficulty of using the loading port and the fax, and the lack of a consistent interface that makes the product less user friendly than it could be. They also felt that the battery does not last long enough, and that the product has insufficient memory. In contrast, functionality related concerns for novices regarding the modified successor were few, and dealt with issues such as whether it in fact would function both as an organizer and as a mobile phone. The input format of the product was also a concern. Attributal concerns were more common. They ranged from the product’s weight, to its screen and keyboard size, its fashionability, and whether consumers would need all the features brought forward by the product. Discussion and implications The purpose of the paper was to examine consumer perceived performance risk in successive product generations. It treated risk as consisting of differing degrees and proposed expertise to be a mediator between the differing amounts of risk perceived between product generations. In summary, consumers perceived less risk regarding the original innovation than the modified successor. They were impressed by the original product’s innovative functionality and found it technologically advanced. Since it was new, consumers had yet to learn about its functionality associated risks. Regarding the modified successor, however, experts knew what improvements they would have liked to see in it. If these had not been implemented, experts considered it to encompass more functionality-related risk than the original innovation. Consumers that had not been exposed to the original innovation (novices), however, perceived more attributal risks regarding the modified successor. Companies manufacturing successive product generations should take note of the results of this study. They should communicate to consumers that have usage experience of the original innovation (experts) what functionality-related aspects are changed between the original innovation and its modified successor, whereas aiming at reducing attributal risks for consumers with no previous usage experience (novices). Regarding the original innovation, consumers could already be involved at a concept testing stage. They could early on communicate to the company what product attributes they find entail risk so that these could be removed or modified before the product is launched. In such a way, the company could most effectively minimize consumer perceived risk. The results of this study should be extended to include other dimensions of risk. Another kind of risk some of the participants’ in our study touched upon (especially regarding the original innovation) was social risk. More specifically, some consumers had trouble identifying themselves as communicator users. They were not sure how they could benefit from using the communicator, or if the product would fit into their lifestyles; they did not want to give “the wrong picture” to other people. Social risk can
be linked to observability (Rogers, 1962, 1995); the more people see you use the product the more important it becomes to make the right choice. There also seemed to be some need to justify the purchase decision to a close reference group (such as the family). This kind of risk had apparently diminished for the modified successor. Since handheld devices were more common at the time of the second study, it is not surprising that respondents had accepted them as products people use. The social risk of having a handheld device did not matter that much anymore; it was more a concern of which handheld device to buy than whether to buy a handheld at all. References Basu, K. (1993), “Consumers’ categorization processes: an examination with two alternative methodological paradigms”, Journal of Consumer Psychology, Vol. 2 No. 2, pp. 97-121. Bauer, R.A. (1967), “Consumer behavior as risk taking”, in Cox, D.F. (Ed.), Risk Taking and Information Handling in Consumer Behavior, Harvard University, Boston, MA, pp. 507-23. Dowling, G.R. and Staelin, R. (1994), “A model of perceived risk and intended risk-handling activity”, Journal of Consumer Research, Vol. 21 No. 1, pp. 119-34. Farquhar, C. and Das, R. (1999), “Are focus groups suitable for sensitive topics?”, in Barbour, R. and Kitzinger, J. (Eds), Developing Focus Group Research: Politics, Theory and Practice, Sage, London, pp. 47-63. Fiske, S.T. (1982), “Schema triggered affect: applications to social perception”, in Clark, M.S. and Fiske, T.S. (Eds), Affect and Cognition, Lawrence Erlbaum Associates Inc, Mahwah, NJ, pp. 55-78. Gentner, D. (1989), “The mechanisms of analogical learning”, in Vosniadou, S. and Ortony, A. (Eds), Similarity and Analogical Reasoning, Cambridge University Press, New York, NY, pp. 199-241. Glazer, R. (1995), “Consumer behavior in high-technology markets”, in Kardes, F.R. and Sujan, M. (Eds), Advances in Consumer Research, Association for Consumer Research, Provo, UT. Gregan-Paxton, J. and Roedder John, D. (1997), “Consumer learning by analogy: a model of internal knowledge transfer”, Journal of Consumer Research, Vol. 24 No. 3, pp. 266-84. Goldstone, R.L., Medin, D.L. and Gentner, D. (1991), “Relational similarity and the nonindependence of features in similarity judgments”, Cognitive Psychology, Vol. 23, pp. 222-62. McDonagh-Philp, D. and Bruseberg, A. (2000), “Using focus groups to support new product development”, Institution of Engineering Designers Journal, Vol. 26 No. 5, pp. 2-7. Markman, A.B. and Gentner, D. (2001), “Thinking”, Annual Review of Psychology, Vol. 52 No. 1, pp. 223-47. Moore, G.A. (1999), Crossing the Chasm, Harper Business, New York, NY. Morgan, D.L. (1997), Focus Groups as Qualitative Research, 2nd ed., Sage, London. Novick, L. (1988), “Analogical transfer, problem similarity, and expertise”, Journal of Experimental Psychology: Learning, Memory, and Cognition, Vol. 14 No. 3, pp. 510-20. Novick, L.R. and Holyoak, K.J. (1991), “Mathematical problem solving by analogy”, Journal of Experimental Psychology: Learning, Memory, and Cognition, Vol. 17 No. 3, pp. 398-415. Rattermann, M.J. and Gentner, D. (1998), “More evidence for a relational shift in the development of analogy: children’s performance on a causal-mapping task”, Cognitive Development, Vol. 13 No. 4, pp. 453-78. Rogers, E. (1962), Diffusion of Innovations, The Free Press, New York, NY.
Consumer perceived risk
155
EJIM 8,2
156
Rogers, E. (1995), Diffusion of Innovations, 4th ed., The Free Press, New York, NY. Runyon, K.E. and Steward, D.W. (1987), Consumer Behavior, 3rd ed., Merrill Publishing Company, Columbus, OH. Sujan, M. (1985), “Consumer knowledge: effects on evaluation strategies mediating consumer judgments”, Journal of Consumer Research, Vol. 12 No. 1, pp. 31-46. Sujan, M. and Bettman, J.R. (1989), “The effects of brand positioning strategies on consumers’ brand and category perceptions: some insights from schema research”, Journal of Marketing Research, Vol. 26 No. 4, pp. 454-67. Veryzer, R.W. (1998a), “Discontinuous innovation and the new product development process”, Journal of Product Innovation Management, Vol. 15 No. 4, pp. 304-21. Veryzer, R.W. (1998b), “Key factors affecting customer evaluation of discontinuous new products”, Journal of Product Innovation Management, Vol. 15 No. 2, pp. 136-50. Ziamou, P. (2002), “Commercializing new technologies: consumers’ response to a new interface”, Journal of Product Innovation Management, Vol. 19 No. 5, pp. 365-74. Ziamou, P. and Ratneshwar, S. (2003), “Innovations in product functionality: When and why are explicit comparisons effective?”, Journal of Marketing, Vol. 67 No. 2, pp. 49-61.
The Emerald Research Register for this journal is available at www.emeraldinsight.com/researchregister
The current issue and full text archive of this journal is available at www.emeraldinsight.com/1460-1060.htm
The valuation of technology in buy-cooperate-sell decisions
The valuation of technology
Vittorio Chiesa and Elena Gilardoni Politecnico di Milano, Milano, Italy, and
157
Raffaella Manzini Universita` Carlo Cattaneo – LIUC, Castellanza (VA), Italy Abstract Purpose – In recent years companies are often involved in decisions concerning not only the external acquisition of technology, but also the opportunity to sell or to cooperate. Whatever the form of transaction, the valuation of technology and, in general, of the intangible assets represents a common and relevant problem. The paper aims at providing an analytical framework for valuing a technological asset. Design/methodology/approach – The study is based on a deep analysis of the academic literature, as well as corporate practice, and multiple case studies. They have been elaborated through the involvement of intellectual property managers of Italian firms and Intellectual Property consultants. An in-depth case study has been conducted in order to obtain more insights to enrich the framework, and discuss some of the theoretical and practical problems affecting the appraiser during a technology valuation. Findings – The results show that the valuation process is not simple, but quite multifaceted and that it is not systematic either in the literature or corporate practice. Originality/value – The proposed framework developed in the paper points out the most critical elements that could lead to a misleading and/or unusable and/or biased valuation; forces the appraiser to perform a systematic and rational analysis, coherent with the context of the valuation, solve some critical trade-offs and deal with contrasting elements; and increases the bargaining power of the appraiser during the negotiation with a potential counterpart, allowing a clear and complete understanding of the asset value. Keywords Asset valuation, Intangible assets, Decision making, Technology led strategy Paper type Research paper
Introduction In the last few years the importance of external technology acquisition has greatly increased and this is critical for the success of the innovation process within firms (Chatterji, 1996). In fact, in the area of technology, firms much more frequently contract-out their own technology to third parties or contract-in technology from external sources than they did in the past (Escher, 2001). In the literature this topic is widely analysed and discussed, especially from the perspective of companies that access external sources of knowledge and technology (Roberts and Liu, 2001). The following topics have already been given significant attention: This paper is the result of the joint work of the authors. Vittorio Chiesa wrote the “Introduction”, Elena Gilardoni “The valuation of intangible assets in literature”, “Techniques for valuing technological assets” and “The appraisal process: a reference framework” and Raffaella Manzini “The empirical study” and “Concluding remarks”.
European Journal of Innovation Management Vol. 8 No. 2, 2005 pp. 157-181 q Emerald Group Publishing Limited 1460-1060 DOI 10.1108/14601060510594710
EJIM 8,2
158
(1) the motivation that pushes companies towards external sources of knowledge and technology (Atuahene-Gima and Patterson, 1993); (2) the organisational forms for accessing external sources (Chatterji, 1996; Chiesa and Manzini, 1998); and (3) the management of technological collaborations (Chiesa, 2001). These studies reveal a significant problem, which affects the definition, organisation and management of collaborations aimed at exchanging technology and technical know-how. This problem is related to the valuation of technology-based assets, such as patent, process and technical know-how. The aim of this paper is to analyse the process of appraisal for technological assets involved in a buy-cooperate-sell decision, in order to develop a framework to support managers in dealing with this type of process. The valuation of intangible assets in the literature In the literature as well as corporate practices, great attention is given to intangible assets. An intangible asset is defined as a resource that does not have a physical embodiment and whose industrial and economic exploitation gives a claim to a future benefit (Bouteiller, 2000; Smith and Parr, 2000; Lev, 2001). There are several intangible assets’ classifications (Brugger, 1989; Anson, 1998, 2001; Gotro, 2002); one illustrative, although not comprehensive has been put forward by the Financial Accounting Standards Board (Holzmann, 2001) (refer Table I). As shown in Table I, the term “intangible asset” covers a wide range of resources; in fact it could be: . part of an integrated group of other business assets: such as trained staff, mailing lists, customer lists, agreements; or . an independent economic unit: such as patents, copyrights, trademarks, technological know-how, technical drawings. This paper focuses on assets belonging to the second group, i.e. on separable and identifiable assets (Brugger, 1989; Guatri, 1989). In particular the paper considers technology-based assets such as patents, process and technical know-how, engineering
Intangible assets
Examples
Customer-based or market-based assets
Customer base, mailing list, distribution channels, presence in geographic location or markets Technical expertise, assembled workforce, trained staff Favourable government relations, outstanding credit rating Consulting agreements, advertising contracts, rights (water, gas allocation, lease) Patents, copyrights, trademarks Computer software and programs, technical drawings, database
Workforce-based assets Corporate organizational-based and financial- based assets Contract-based assets Statutory-based assets Technology-based assets Table I. Intangible assets
Source: Holzmann (2001)
drawings, computer software and databases. These technology-based assets[1] can generate income (and therefore value) separately from the business enterprise and can be bought, sold or licensed-in/out as independent assets. This phenomenon is becoming increasingly relevant, and it is highlighted by the fact that firms are increasingly relying on external sources of technology to support their innovation process (Roberts, 2001; Jones et al., 2000; Howells, 2000; Chatterji, 1996; Chatterji and Manuel, 1993). The nature of technological innovation – the need for technology fusion, the increasing specialisation in knowledge production, the pressure on time and costs – forces companies to search for partners able to support their innovation process, particularly those that serve their need for technological assets (Kodama, 1992; Chiesa and Manzini, 1998; Chiesa, 2001). In this context, a “market for technology” is emerging (Arora et al., 2001), in which technology is exchanged among different companies through buy/sell transactions or within several forms of co-operative agreements (such as joint ventures, alliances, consortium etc.). Whatever the form of transaction, the commercialisation of technology calls for a definition of the “value” of the subject technology. In the existing literature there are several articles dedicated to the importance of these technological assets and to the problem of their valorisation. The importance and the value of technological assets have increased consistently in the past two decades. In fact, today the value of intangibles exceeds the value of tangibles by six-seven times (Lev, 2001); whilst at the beginning of the 1980s the value of tangible assets was twice that of intangibles. A great deal of research concentrates on this first aspect (Morris, 2001; Korniczky and Stuart, 2002). In the past, companies derived a significant part of their own value from hard assets and manufacturing processes (Gotro, 2002) investing heavily in the use of tangible assets to gain a competitive advantage. Today, technological assets play a key role in determining the value of the company (Daum, 2001). This is consistent with the changes affecting the competitive context. In recent years the competitive context has become more and more dynamic and turbulent. In other words, there is not only competition among tangible assets, that either change very rapidly or are not able to sustain the competitive advantage over the long run, but also (and even more) with intangible ones. This is particularly emphasised in the area of technological assets. Hence these intangible assets are becoming a powerful tool in facing competitive market forces alongside the traditional assets (AAVV, 1998). The second theme analysed in the literature is the valorisation of intangible assets. The valuation of these types of assets is critical for company shareholders. It is critical in assessing the true value of shareholder companies. It is also an important tool for the management of the firm in supporting the decision-making process. The literature contributions are focused on different aspects of the valuation. A number of authors have analysed the methods and techniques applied to perform a proper economic analysis. These methodologies can be classified into two main groups (Mun, 2002): (1) traditional methods (among them, the most important are the cost, market and income method); and (2) innovative methods (among them the most important is the real option method).
The valuation of technology
159
EJIM 8,2
160
These methodologies are diffused not only in the academic literature (Anson, 1998; Mard, 2001), but also in corporate practice (Mullen, 1999). As regards these contributions, the valuation techniques will be presented in the next section. Other authors have frequently discussed different problems, such as: . the coherence between the techniques and the type of intangible asset (Smith and Parr, 2000); . the coherence between the techniques and the objective of leveraging technology (Khoury, 1998); and . the linkage between the appraisal method and a specific form of transaction (e.g. licensing) (Berkman, 2002). Little literature has been written on the valuation process and in particular about (1) the most important principles of the appraisal process; (2) the specific activities to be conducted; and (3) how the process should be organised and managed. Some contributions derived from available consulting literature, which draws on some guidelines from the direct experience of companies. There are, in fact, different international valuation firms that provide independent valuation services to the business, financial and legal communities (such as Appraisal Economics Inc., The Patent & License Exchange, Inc. or Willamette Management Associates). They define the main steps making up the process as: . Definition of the problem. This implies the identification of the intangible assets to be valued, the description of the scope of analysis, and the identification of some limiting conditions such as the assumed accuracy of data used in the appraisal. . Preliminary analysis and data selection and collection. The appraiser must analyse and understand the forces, which guide and influence the entire valuation process, such as, the relative bargaining power and the relationship existing between the buyer and the seller. . Application of the three traditional methods. The practice focuses strongly on the cost, market and income method. . Reconciliation of values. When an analyst uses several valuation methods, he or she rarely obtains the same value indications. In this case he or she has to define a range of “significant” values so as to understand why a method is producing outlier value indications. The consulting literature shows that the appraisal process is composed of different and critical activities. The overall weakness of these contributions is that although a set of activities is described, a systematic view of the whole process (of the links among activities and of the relative managerial problems) is not discussed in detail. In view of what is expressed in the academic and consulting literature, the attempt here is: . to study in-depth the entire appraisal process and activities, thus presenting a systematic vision of the entire process;
.
.
to understand how the management of different activities influences the effectiveness of the valuation, identifying the critical problems to be solved during the appraisal; and to suggest some guidelines by ascertaining some solutions to the identified problems.
The valuation of technology
161 Techniques for valuing technological assets Appraisal methods and techniques are broadly classified into: . cost method; . market method; . income method; and . real option method. These valuation methods are well documented in an extensive bibliography: Gilardoni, 1990; Anson, 1996, 2001; Khoury, 1998; Stiroh and Rapp, 1998; AAVV, 1998; Martin, 1999; Razgaitis, 1999; Reilly and Schweihs, 1999; Mard, 2000; Mard et al., 2000; Smith and Parr, 2000; Anson and Serrano, 2001; Damodaran, 2001; Khoury et al., 2001, King, 2001; Mard, 2001; Spadea and Donohue, 2001; Benninga, and Tolkowsky, 2002; Hoffman and Smith, 2002; Khoury, 2002; Mun, 2002; Tenenbaum, 2002; Khoury, 2003; Park and Park, 2004. In this paper, the methods and techniques are presented for an illustrative purpose and are not intended to reflect a comprehensive review of valuation issues. The cost method The cost method appraises the value of technology assets by measuring the expenditure necessary to create and develop the technology asset. This method is based on the economic principle of substitution in which a prudent investor would pay no more for a technological asset than it would cost to create or acquire a similar asset. The technology asset value is related to its cost structure. The structure of cost to be considered during the valuation process can vary. In the literature, there are several definitions of cost which include: . Cost of avoidance (or cost savings) quantifies either historical or prospective costs that are not incurred by the owner of the technology due to the ownership of the subject technology. . Trending historical costs. Current historical asset development costs are identified and quantified and then “trended” to the valuation data by an appropriate inflation-based index factor. . Re-creation cost (or reproduction cost) is the total cost, at current price, to develop an exact duplicate or replica of the subject technology. This duplicate asset would be created using the same materials, standards, design, layout and quality used to create the original technology. . Replacement cost is the total cost to create, at current price, an asset having equal utility[2] to the technology subject to be appraised. However, the replacement technology would be created with modern methods and developed according to
EJIM 8,2
162
current standards, state-of-the-art design and layout and the highest possible quality. Accordingly, the replacement technology may have greater utility than the subject technology. Among these, the most common types adopted in practice are reproduction and replacement costs. However, many authors consider the structure of cost irrelevant to establish the value of a technological asset. At most, it could be used as a benchmark value. In fact the cost-based method has too many weaknesses. It does not take into consideration the amount of economic benefits related to the ownership and exploitation of assets, whereas it includes the sunk R&D costs. The second main weakness is the implicit assumption that expenditure should always create value, as a matter of fact not all costs lead to successful assets. Another weakness is related to the efficiency of investments. The cost method assumes that the level of past investment-effectiveness will be the same in the future. This is a false assumption as there are several situations in which an investment can be characterized by different levels of efficiency. This method is usually used when the application is at such an early stage of development that its market application is still unclear. In this case, in fact, the level of uncertainty is higher and the knowledge of the future business is very limited. In conclusion, the cost-based method appears inappropriate to establish the value of the technology, as it is applicable only when the extent of uncertainty is very high. Even then only a benchmark value is provided. The market method The market method measures the present value of future benefits by obtaining a consensus of what others in the marketplace have judged it to be. This provides an indication of value by comparing the price at which similar intangibles have been exchanged between willing buyers and sellers. In other words, when the market approach is used, an indication of the value of the specific item of intangibles can be gained from looking at the price paid for comparable asset. This appraisal method is based on the economic principle of competition and equilibrium; in a free and open market the supply and demand factors will drive the price of all goods to a point of equilibrium. This method is largely intuitive and easily understood, for this reason it is widely adopted. The application of the market method can be summarized as follows: (1) Identifying the units of comparison (comparables). In order to do this, the selected units have to be comparable each other. Elements commonly looked at to select the appropriate comparables are: industry, market share, capital investments required for the exploitation. (2) Identifying the appropriate information. For each comparable, the appraiser has to collect data about: . The transaction, i.e. the value at which the transaction has been concluded. . An economic measure, such as revenue, or margin or net profit associated to the technology-based asset, or, alternatively, an operative measure such as, for example, the number of users of the technology.
(3) Calculating the ratio between the value of transaction and the economic or operative measure. This ratio is called “multiple”. (4) Applying the “multiple” to determine the value of the technology. Requirements for successful use of this approach include the following: . the market has to be active: having a few number of exchanges does not make a real market; and . the market has to be public: the information of exchanges have to be available. The main weakness concerns the point that transactions are unique (referring, for example, to the specific characteristics of the buyer and/or of the seller), in fact this is not considered by the market method as it assumes that the value of the transaction is similar to that of comparables. The income method The value of any asset can be expressed as the present value of the future stream of financial benefits that can be obtained from the exploitation of the specific technology considered. This method is based on the principle of expectation. For the application of this technique, the calculation of: (1) the future cash flows, related to the specific asset; (2) the time horizon considered, i.e. time in which the above cash flows can be generated and reliably estimated; and (3) the actualisation rate, which reflects the business risk and is usually estimated with the Capital Asset Pricing Model (CAPM) is needed. The expression of the value of the asset is shown in the income method: VT ¼
T X NCFðtÞ ð1 þ kb Þt t¼1
where VT is the technological asset value, NCF(t) the net cash flow, kb the actualisation rate reflecting business risk; and T is the time horizon. This method is the most accurate to value technology as it considers the specific operating environment (market size, pricing, cost structure, risk) in which the technology is exploited. However, its practical application may present problems, as the required data can be difficult to estimate. The real option method The cost, market and income methods all have significant limitations because they consider given technological assets without considering the opportunity (but also the risk) embedded in them. In particular the income method assumes that the projection will meet the expected cash flow and it handles the risk in the actualisation rate. However, cash flow is usually stochastic and risky by nature, the risk has different characteristics and can change across project time. A method that overcomes this limitation is the real option method; it, in fact, is considered an extension of income analysis. The real option is an instrument to
The valuation of technology
163
EJIM 8,2
164
respond to uncertain events. The theory behind option pricing was originally developed for use in financial development. It has recently received growing attention in R&D and in new technology development because it can support the decision process. In fact, not all decisions are made in the present but are deferred to examine the future. The real option is also applied to establish the value of technological assets during a transaction process: when information is incomplete and, in particular, unknown the appraiser can (has to) use the option theory in order to make risk and uncertainty explicit. This new method is tailored to deal with uncertainty and flexibility. The adoption of this method requires the identification of factors such as: . present value of project cash flows; . standard deviation of the project value; . investment cost of project; . time left to invest in; and . risk free interest value. These factors are used to calculate the value of intangibles using a specific formula; the most famous is The Black-Scholes Model. The real option method represents a new way of thinking; uncertainty is considered an opportunity to create economic value. These methods have not been presented in order to give a comprehensive review of valuing issues related to valuation methods, but to underline their strengths and weaknesses (Table II). As shown in Table II, each valuation method: (1) requires specific data and information and different resources in terms of the appraiser’s competencies and skills; and (2) is suitable only in specific situations and contexts. In addition after balancing advantages and drawbacks for each valuation method, the appraiser could choose to use more than one method simultaneously. The appraisal process: a reference framework According to the current literature presented in the above sections, a framework has been developed aiming to give a systematic vision of the appraisal process and to identify the most critical problems. This framework is shown in Figure 1. Within the framework, three different elements should be distinguished: activities, constraints and links. The activities represent the logical phases of the appraisal process: . identifying the unit of analysis; . identifying the aim and scope of analysis; . identifying the most appropriate valuation method(s); . comparing available and necessary data; . collecting data; and . determining the value of the asset. The process is affected by constraints such as: . available data;
Major disadvantages
Some elements that in the other methods are considered implicitly, such as the income generating capacity, the appropriate cost of capital and the risk associated with the asset, are made explicit Adaptable and flexible method Well know and widely recognized method
Option calculation require to use a complex formula The project value uncertainty is difficult to estimate The underlying has to be estimated
The projection of the future net cash flows is difficult The estimation of the actualisation rate is complicated. It has to consider not only the cost of capital but also the risk associated to the intangible asset The required data and information need to be estimated
A practical and logical method applicable to all type Most technological assets are not traded frequently of intangible assets enough to be able to establish a comparison Most direct method The intangible assets are commonly traded within a business and it is difficult to dissociate them from the business Get enough details on similar transactions is difficult The market is characterized by the buyer’s interest and this may bring in distortions
The idea of the minimum value is given The future earnings of the asset is not reflected The information and data are available and highly The efficiency of past investments is not considered reliable (reproduction cost) There is the implicit assumption that expenditures should always create value
Major advantages
Real option method Find the value of the asset starting from the Most complete method appraisal of the future benefits associated to the Uncertainty and variability are considered asset and considering uncertainty and variability in future outcomes
Income method Find the value of the asset starting from the appraisal of the future benefits associated to the asset
Find the value of the asset starting from its recreation cost Replacement cost Find the value of the asset starting from the recreation of its own utility Market method Find the value of the asset starting from sales comparison
Cost method Reproduction cost
Methods and scope
The valuation of technology
165
Table II. The major appraisal methods
EJIM 8,2
166
. .
necessary data and resources/time required to apply a method; available time and allocated resources.
The links represent the relationship between two or more logical phases. Obviously such links do not indicate a sequential relationship, but a logical one. As a consequence, in some case, different phases can be conducted contemporarily, and/or there could be feedbacks throughout the process. The activities of the appraisal process Identifying the unit of analysis. A correct appraisal process starts by identifying the unit of analysis. Problems can emerge during the appraisal process of the technological asset. Even though an asset is an independent economic unit, it can be a part of a product, system or service. In a number of cases, there is a difference between the technological asset and the product, which embeds the technology and is available on the market. Electrical products representative of the industrial sector present such a case, where the technological asset is a component of a system. For example, the microchip is a component of several products such as PCs and mobile telephones. If the technological asset is a component of a system, it is important to recognize that not all the income (or value) generated by the system is related to it. In some cases, indicators measuring the technological asset’s value compared to the value of the system can be identified. But it is not possible to suggest a general method to identify such indicators at this point. In fact it is due to the technological asset and the system. For example, a possible solution can be identified considering the incidence of the reproduction cost of the technological asset on the system. The same incidence could be used to estimate the value of future cash flows generated by the technological asset analysed. At some point it would be interesting to understand the exact contribution of the technological asset to the functioning of the entire system. The higher the contribution, the higher the value generated by the technology. Another aspect of the problem arises when the technological asset has different uses, i.e. when it can represent, in different situations, an end product, a component or a work in progress. For example a company can use a pharmaceutical molecule as an end product and can sell it on the market; the same pharmaceutical molecule can be used as a catalyst in a chemical process. This implies that different values can correspond to various uses, i.e. that the unit of analysis is not the single molecule, but the “molecule þ its relative use”. As pointed out in these examples, recognizing the right unit of analysis can be difficult. But it is important for the appraisal process because it allows
Figure 1. The appraisal process framework
for making a correct definition of the context and borders of analysis. Thus, it is fundamental because it defines the unit of reference for all the other activities. The proposed framework aims at putting into evidence these issues and, in synthesis, at alerting the appraiser to pay attention to these problems: . An intangible asset can be a part of a product or system, even if it is considered as an independent economic unit; in this case, it is necessary to separate the value generated by the intangible from the value of the product/system. . The same intangible asset can have different uses, in this case it is important to identify a specific use, to which a specific value is associated. Identifying the aim and scope of analysis. The valuation of technological assets can be performed in different contexts and, hence, may have many different aims (Rabe and Reilly, 1996; AAVV, 1999). The context of the valuation can be: (1) The accounting process. Owing to the rising interest and relevance of intangible assets, a correct accounting of intangible assets is necessary. This, in fact, allows managers and stakeholders to increase their own knowledge of the dynamics of value creation and to define the value to be considered within the accounting reports and the external financial reports; (2) The decision making process. Valuing technological intangibles is critical for managers that are required to make decisions on: technology acquisition vs internal development; direct vs indirect technology exploitation; technology selling vs technology licensing; (3) The transaction process. An intangible asset analysis and valuation is often required to define the terms of the contract related to the transaction process (e.g. the negotiated price). The main commercial transaction forms are: . The transfer of ownership. This category includes all business transactions, in which there is a complete shift of the ownership title of the asset that a part grants to another without restriction. . The transfer of the right of use. It is the right that the owner of a technological asset grants to a third part, under the payment of an earning (Brooke and Skilbeck, 1994). (4) The infringement process. Sometimes the intellectual property rights on intangibles can be infringed and in these situations a valuation of damage is required. (5) The bankruptcy process. In dividing and distributing the debtors’ assets, the value of intangible assets has to be established to identify, for example, any cancellation of debt income. The comprehension of the aim and scope of the analysis affects the appraisal process since it influences the available data and the identification of the most appropriate valuation method(s) (Figure 1). These links will be analysed later. Identifying the aim and scope of the analysis often requires a specification of the set of actors potentially involved in the use of the intangible valued. For example, in a transaction or in case of bankruptcy, the identification of the potential buyer/seller or
The valuation of technology
167
EJIM 8,2
168
licensee/licenser or creditor is required. The specific characteristics of the counterpart (its competence, marketing strategy, cost structure, etc.) affect the valuation, since they determine the specific data to be used when the valuation method is actually implemented. It is critical to underline: . The correct identification of the context of analysis influences the identification of the most appropriate method(s) for the valuation and the relative application. . The identification of the aim and scope of the valuation defines the context in which the valuation takes place. It allows for the improvement of the accuracy of data used and the quality of the valuation itself. Identifying the most appropriate method(s). The appraiser has to identify a method (or methods) to be used to determine the intangible’s value. As shown in Figure 1, this activity is influenced by several factors: . The availability of time and resources. A first selection of the method is made considering the level of resources allocated to the process. In fact, the appraiser has to select a suitable method, according to the resources (in terms of quantity, but, first of all, competence) that are actually available. Some sophisticated method, such as the income or the real options method, can be used only by expert and trained analysts. . The identification of the aim and scope of analysis. For example, consider an appraiser who has to support the accounting process. In this case, the accounting rules must comply with the historically sustained costs that have to be considered and, hence, the cost method is required. Instead, if the analysis is conducted in order to support a transaction process, the appraiser has to value the future potential benefits generated by the asset and the cost for exploiting it. Hence, a method able to take into account the future benefits associated with the intangible asset is necessary, such as the income or the real options methods. In synthesis, it is important to observe that the choice of the valuation method is not trivial, and that several elements are to be considered. In particular, the advantages and disadvantages of the various methods (shown in Table II) should be evaluated in the light of the specific aim, scope and resources identified for the analysis. Comparing necessary and available data. As mentioned previously, in the literature review the major appraisal methods are widely explained and discussed. The main characteristics of the methods are described in Table III, in terms of necessary data, time horizon and resource required. These must be considered to correctly use the techniques. In particular, matching necessary data with available data is critical. As a matter of fact, beyond theoretical considerations about the coherence of the method with the specific context, aim and scope of the valuation, a definite set of data is necessary to adopt each method. As a consequence, if the necessary data are not available (due to a lack of time, resources or competence etc.) the (theoretically) selected method cannot be adopted. In other words, comparing necessary data with available data allows for the identification of the “usable” method(s), among those previously selected as appropriate for the specific case.
Necessary data
Future
Future
Present
PresentFuture
Past
Time horizon
Notes: (a)The amount of resources (time) required to apply the method increases following the arrow; in fact the methods are listed in increasing order of sophisitfication (Pitkethly, 1997). For example the cost method (e.g. reproduction cost method) could require a lower amount of time and resources than income method. This is justifable because the income approach involves some element of forecasting the future cashflows. Moreover, the higher level of sophistication, the higher is the amount of time required for a correct aplication ans the hgher is the level of competences and skills needed
Cost Reproduction cost Material: it includes expenditures related to the tangible elements of the intangible asset development process Labour: it includes expenditures related to the human capital efforts associated with intangible asset development process Entrepreneurial incentive: it is the amount of expenditures required to motivate the owner of the intangible asset to enter into the development process or to produce a new patent, trademark, chemical formulation, etc. The previous types of cost should be adjusted in order to: express the historical costs to current price (a capitalization rate has to be considered); take into consideration the obsolescence, i.e. the reduction in the value of the intangible asset due to improvements in technology or its inability to perform its originally function (e.g. the remaining useful life) Replacement cost Material: it includes expenditures related to the tangible elements used during the intangible asset development process Labour: it includes expenditures related to the human capital efforts associated with intangible asset development process Entrepreneurial incentive: it is the amount of expenditures required to motivate the owner of the intangible asset to enter into the development process or to produce a new patent, trademark, chemical formulation, etc. Market Similar transactions: the comparison is made with reference to transactions involving similar assets that have occurred recently in similar markets Income Future net cash flows: incremental revenues; decremental expenses; additional investments Time horizon: the period during which the intangible is expected to generate net cash flows Actualisation rate: the future net cash flows will be actualisated Real option Underlying: it is the current value of asset, that is the present value of expected cash flows Exercise price: it is the present value of investment cost Time of expiration: time until opportunity disappears Risk: project value uncertainty Interest rate: risk-less interest rate
Methods
Resources required for a correct application (a)
The valuation of technology
169
Table III. Characteristics of major appraisal methods
EJIM 8,2
170
In several cases the available data are not coherent with the context and bundles of valuation and/or with the necessary data. Sometimes, in fact, the specific context requires the use of a particular valuation technique (e.g. the income method), but the necessary data are not available (e.g. the projection of future net cash flow). However, the analyst has to resolve the problem and come to a valuation. In this case, the appraiser may decide to accept a lower level of accuracy of the valuation, using “proxies” or fuzzy estimations for the unavailable data. The value indication will be less precise, but will be obtained with a method, which has the right “perspective”. The result obtained can be improved, in the future, with greater resources, time, etc. As pointed out above, defining the usable method is a critical step that implies for coherence with the other steps previously described. In some cases, this step forces the appraiser to accept compromises or to select the “best solution” when the “optimal solution” (in terms of accuracy, precision, overall coherence etc.) cannot be identified. Collecting data. In order to determine the value of the technological asset, it is important to access and collect data concerning different aspects (financial, operational and market oriented) and different time (historical, contemporaneous and future). The unit of analysis, the aim and scope of the valuation and the usable method identified, determine the types of data that have to be collected (Figure 1). During this activity the main problems are related to: . The identification of data sources. The necessary data, in general, has to be partially collected outside the appraiser’s company. This means that data sources are both internal and external, with respect to the appraiser’s intangible[3]. Obviously, data that have to be collected from external sources can create several problems, due to the confidentiality and “secrecy” of some information. Some public sources can be used, such as legal and trade publications, databases, newspapers. Even if the source of information is internal, the data are not always easily accessible, for example, when data are “dispersed” among databases and systems that are not integrated. . The identification of the “right” data and information to be collected, according to the aim, scope and context of the valuation. Particularly when a great deal of data are available, selecting what is really necessary can be difficult. . The completeness and accuracy of data. The same information, obviously, can be collected with a different level of accuracy. This affects the final result: a valuation is more reliable as the accuracy of data and information increases. But as previously explained, the accuracy of data is influenced by time and resources allocated to the process. In fact, increasing time and resources available usually means availability of a more complete, detailed and precise set of data and information. Determining the value of the asset. This is the phase in which the selected valuation technique(s) is (are) actually applied in order to come to the final value of the technological asset. This activity can present different problems concerning: (1) The method. Each valuation method presents some specific criticalities (as shown in Table II) that make the application difficult.
(2) Its correct application. Sometimes there are difficulties related to the correct analysis and use of data previously collected. This challenge is influenced not only by the appraiser’s capability and experience, but also by the time and resources allocated to the process; and (3) The management of the different values obtained when more than a single method is applied. Generally, different valuation methods produce different results. This is coherent with the fact that the technological asset cannot have a definite, precise, “universally” valid value. Different methods and results allow us to define a range of significant values. The single results are valid: . in the specific context and hypothesis under which they have been calculated; and . in relation to the level of accuracy of data and information. The constraints of the appraisal process The constraints can be classified into: . necessary data and resources (time) required to apply a method; . available time and allocated resources; and . available data. Necessary data. This is the information needed for a correct application of a definite method. It is determined by the characteristics of each method and, in this sense, cannot be modified. Hence, it represents a critical constraint within the process. The appraiser has to analyse in-depth the information needed to apply to a specific method for implementing it in the correct manner. Resources (time) required. These represent the resources needed to properly apply the usable method and to establish the value of the technology with a high level of accuracy. Techniques, such as the real options method, undoubtedly require a complex data elaboration, a sophisticated analysis and, hence, a vast amount of time and competent resources. Available time and resources. These are usually determined by decisions taken at top management level, depending on the relevance assigned to the valuation. These variables affect the level of precision of expected results, influencing first of all the identification of the usable method, the collection of data (and the relative completeness and reliability) and the implementation of the method (and the relative theoretical coherence). Available data. This is the information that can be accessed during the appraisal process and that is identified by the specific context of analysis. In more detail, the available data is influenced by: . The unit of analysis. For example, in the case of very innovative technological assets, data on the future cash flows is usually unavailable. . The aim and scope of the valuation. For example in the context of the accounting process, data and information is generally accessible and can be obtained in a short time and with little cost. In the case of transactions, another element is the identification and knowledge of the potential buyers. As previously explained, this information is necessary in order to quantify particular data. Also the
The valuation of technology
171
EJIM 8,2 .
appraiser’s position, with respect to the valued asset, is important. In fact if the company possesses the asset, undoubtedly more data and information will be available than for an external appraiser. The available time and the level of allocated resources. These elements impact on the quality and the degree of accuracy and completeness of available data.
172 The links within the appraisal process As shown in Figure 1 there are several links within the framework. A careful management of the entire valuation process is assured by links; in fact activities and constraints are consistent due to the links. The importance of same links and their meaning have already been discussed above, in particular: . “Unit of analysis – aim and scope of analysis” link was analysed during the definition of activity related to the identification of the unit of analysis. . “Aim and scope of analysis – available data” and “aim and scope of analysis – most appropriate method” links are presented within the description of “identifying the aim and scope of analysis” activity. . “Time and resource allocated – most appropriate method” link is illustrated throughout the discussion of “identifying the most appropriate method” activity. . The links related to the activity “compared available and necessary data” have just been examined during the presentation of this activity. In the framework, the appraisal process presents feedback. Feedback is required to assure the coherence of the entire valuation process with respect to constraints. Sometimes a loop is required if it is unable to identify a usable method during the comparison between necessary and available data. In this case, the appraiser has to repeat certain activities (the identification of available time and resources, the identification of the appropriate method, etc.), modifying the relative conclusions. The empirical study The described framework has been based upon (1) the most recent theoretical state-of-the-art literature; and (2) the empirical cases illustrated in this literature. An empirical research was necessary to enrich the framework and improve its completeness, clarifying: . the factors considered by companies during the valuation process; . how the management of different elements that compose the framework, has an effect on the process; and . the main issues and the critical problems faced by the appraiser during the whole process. The empirical research comprises qualitative interviews and a case study. The research has been conducted by interviewing five managers of private and public
institutions, directly involved in the problem of valuing technology-based assets (Table IV). The case study concerns the Technology Transfer Office (TTO) of Politecnico di Milano (an Italian University). The TTO has been appointed to manage the economic and industrial exploitation of patents belonging to the University. The case study has been conducted to: . apply the framework to show the meaning of activities, constraints and links in a real and specific context; . enrich and complete the framework; and . highlight and discuss the problems faced by the appraiser during the whole process.
The valuation of technology
173
Identifying the unit of analysis The case study concerns the licensing of a patent applied for in the surgical field. The patent concerns a specific aortic cannula that will be used during surgical open-heart operations. In these particular operations the heart is stopped and a machine is used to pump blood into the patient’s aorta through a cannula. The new device differs from the traditional cannula mainly in its terminal section that enters the patient’s body. The traditional cannula has a pre-established and rigid final section, whilst the new device has a flexible final section folding in on itself and, therefore, with a variable area. This new device could bring about many advantages that are briefly described below: . The new device is less invasive. Having a flexible terminal section, the surgical cut is smaller than with the traditional devices. . The terminal section is flexible. For this reason it can be larger than in the traditional cannula. Moreover, the speed of blood flow, pumped from the machine, is lower and does not damage the walls of the blood vessel (this problem often arises with the traditional cannulae). . The flexible final section folds in on itself, when the heart starts working again, limiting the clogging of the aorta.
Firm
Brief firm’s description
Role of interviewed people
Italtel
Supplier of telecommunications devices Group operating in energy cables, telecom and tires businesses Technical University
Industrial Property Manager
Pirelli Politecnico di Milano Snamprogetti STMicroelectronics
Engineering company of the ENI Group (petrochemicals) Global semiconductor company
Industrial Property Manager Director of Technology Transfer Office Licensing & Technology Planning Department Manager Industrial Property Manager
Table IV. Institutions involved in the research
EJIM 8,2
Examining the object of analysis, it is evident that the technological asset corresponds to the product to be sold. Therefore, establishing the value of the surgical device means appraising the technological asset. In this case the identification of the unit of analysis does not present any trouble.
174
Identifying the aim and scope of analysis The University first identified and then exploited the patent. Licensing out is preferred, as an indirect form in the exploitation of patents. The main reasons underlying this policy are: . maintaining a wide patent portfolio gives a positive image; and . the selling of patents can create problems with the further researches of the University. An analysis was carried out to support the transaction process and the TTO had to quantify the value of the patent in order to define the economic benefits arising from the licensing out of the patent. The identification of the potential licensee was required to improve the understanding of the context of analysis and consequently the accuracy of data. Also in relation to this aspect, the TTO analysed the potential buyers of the license briefly described in Table V (letters used for confidentiality). The TTO selected two licensees, firms A and F, among the different firms operating in the market for aortic cannulae. Firms D and E were not considered due to unavailable data (e.g. USA market share). Among the firms, the TTO selected: . Firm A because it is a worldwide leader in the sector of producers of pump machines for surgical open-heart operations and has recently acquired a small firm that produces traditional cannulae. . Firm F because it is a small company, specialising in the production of cannulae. The first (A) has 38 per cent of the global market share, while the second (F) covered 6.5 per cent of the cannulae producing market (Table V). This case study shows that: . The TTO had to identify the potential licensees in order to clarify the context of the valuation, and to define correctly and accurately the boundaries of analysis. . The TTO has introduced subjectivity into the appraisal process (two specific potential licensees have been selected). This choice dramatically affects the following steps of the valuation. Firm
Table V. Potential buyers of the licence and their relative market share
UE (per cent)
USA (per cent)
Average (per cent)
40 22 10 8 7 5
36 33 12 n.a. n.a. 8
38 27.5 27.5 n.a. n.a. 6.5
A B C D E F Note: n.a. ¼ not available
Identifying the most appropriate method(s) After the identification of the context in terms of unit of analysis and aim and scope of analysis, the analyst has to select the most appropriate method(s). As explained in Figure 1, this activity is influenced by the aim and scope of analysis. Owing to the transaction process, the appraiser, very likely, will adopt a method able to consider the future benefits associated with the patent. So the cost method does not seem adequate to the aim of this valuation. This method does not take into account incremental profits that are critical for an external buyer[4]. On the contrary, the income and the real options methods seem to be the best option, according to the aim and scope of the analysis. The market method can be considered adequate as well, even if it is less precise and complete. Beyond the aim and scope, the available resources have to be considered. From this point of view, traditional techniques are preferred, because the competencies needed to apply the innovative techniques are lacking. For this reason, the income method and the market method are the most appropriate methods.
The valuation of technology
175
Comparing necessary and available data The following Tables VI and VII, show the situation of available and necessary data in the case of the valuation of the cannula according to the two most appropriate methods identified: income and market methods. The following considerations emerge from the analysis of these elements: . Critical information allowing for the application of the income method is the quantification for future net cash flows sprung from the use of the new device (Tables III and VI). The advantages related to the use of the new cannula cannot be easily expressed in terms of increases in revenues and/or decreases in expenses except in the case of claims for assurance related to surgical problems. . Data needed for the market method is available, since it is possible to find out the current price of similar devices and the dimension of the market (Table VII). The above analysis points to the use of the market method. The selection of the appraisal method is linked not only to the comparison of available and necessary data, but also to the identification of the most appropriate method(s) (Figure 1). In particular the previous analysis has shown that the income Necessary data Future net cash flows (incremental revenues; decremental expenses; additional investments) Time horizon Actualization rate
Not available Available Available
Table VI. Necessary data vs available data (income method)
Necessary data Units of comparison Find the parameters on which carrying out the comparison
Available Available
Table VII. Necessary data vs available data (market method)
EJIM 8,2
176
method is not coherent with the data available, even if, from the point of view of the aim of the valuation, this seems to be the most desirable method. Therefore the choice of the method falls on the market method, as the necessary data are available. The market method is also coherent with the aim and scope of the valuation (i.e. the transaction), since it considers the future economic benefits related to an economic exploitation of the asset. As a consequence, the market method has been selected; the application of this method is quite easy and requires the only quantification of the selling price and the potential market. The case study underlines that this step is influenced not only by the comparison between available data and necessary data, but also by the specific context of valuation. Collecting data To implement the selected method the TTO had to know the market size and the market price of similar devices. The TTO decided to use only external data sources. In fact it examined several market analyses and interviewed many companies working in the field of surgical instruments. Owing to external data sources the TTO was able to estimate the worldwide market for aortic cannulae: as 1,100,000 surgical open-heart operations during year 2000 in 3,000 heart-surgical centres located in 80 countries. In order to estimate the applicable price of the cannula, the price of similar existing products have been analysed. The price applied to the final users (that are the heart-surgical centres) is around e50 per unit. Determining the value of the asset In order to establish the value of the patent the appraiser has to draw up some hypotheses. In this case, the hypotheses are related to (1) the success of the experimentation of the new cannula; (2) the doctor’s ability to pick the advantages related to the new cannula; and (3) the strategic and marketing actions carried out by the manufacturer versus the medical class for promoting the new medical device. On the basis of previous elements the TTO has estimated a 10 per cent penetration rate for both the firms (even if the TTO for firm F could assume a higher rate than firm A’s because F mainly concentrates on the cannulae market (Table VIII). Table VIII presents the served market in terms of the number of cannulae that could be sold. To establish the potential value of the patent, the TTO has to consider not only the size of the potential market, but also the price of the new cannula (Table IX). 1.100.000 u
Table VIII. The served market of the patent
World-wide market
A
F
Market share (per cent) Potential market Penetration rate (per cent) Served market
38 418,000 units 10 41,800 units
6.5 71,500 units 10a 7,150 units
Note: aEven firm F is mainly concentrated on the cannulae market, the TTO decide the use a penetration rate equals to the firm A’s one
As shown in Table IX, the value ascribed to the patent will be e2,090,000 if the patent is licensed to the worldwide leader (firm A), or e357,500 if the licensee is a smaller company (firm F). As we can see the value is strongly dependent not only on the formulated hypotheses, but also on the characteristics of the licensee. This underlines the importance of the right identification of the context of analysis and especially of the licensee.
The valuation of technology
177 The appraisal process The case study showed how the process should be characterised by contrasting elements. In particular during the identification of valuation method, some contrasting elements emerged. Examining the unit, the aim and scope of analysis, the income method would have been better, but the necessary information and data were not available. This problem was solved by selecting a method that represented a “second best” solution, from the point of view of the aim and scope of the analysis, but one that was also coherent with the data, resources and time available. An interesting and more complete analysis could be conducted to discuss the valuation of the patent. In fact, it would be interesting to understand how the value of the patent can change considering other transaction forms, such as the transfer of ownership or a solution in order to take direct advantage of the patent. Concluding remarks In the literature as well as corporate practice great attention is paid to the problems of the valuation of the technological assets, however an in-depth analysis of the whole appraisal process is lacking. In view of this, this paper aims at taking some steps to amend this situation. The paper hopefully presents the complexity of the appraisal process. The first part of the process (from the identification of the unit of analysis to the identification of the usable method) is directed at contextualizing and defining the valuation problem. This part of the valuation process leads to the correct definition of the appraisal problem (in terms of unit of valuation, aim and scope, valuation method(s)) and it does not necessarily need, a real time sequence among the activities. The second part of the process (concerning the collection of data and the actual determination of the asset value), however, has to be executed in a sequenced way (even if some feedbacks are presented – Figure 1) and represents the operative phase of the appraisal process. Even if each valuation of the technological asset is unique, this paper aims at providing an analytical framework for estimating the value of a technological asset. As explained in the paper, it is possible to understand that (1) the appraisal process is not simple, but quite multifaceted; and (2) it is not systematic either in the literature or corporate practice.
Served market Price (per unit) Patent’s value
A
F
41,800 units e 50 e 2,090,000
7,150 units e 50 e 357,500
Table IX. The value of the patent
EJIM 8,2
178
This paper analyses the entire process and gives emphasis to the critical aspects of each phase, suggesting some solutions. In brief synthesis, it can be argued that the use of the proposed framework: . forces the appraiser to perform a systematic and rational analysis, coherent with the internal and external context of the valuation; . points out the most critical elements that could lead to a misleading and/or unusable and/or biased valuation; . forces the appraiser to solve some critical trade-offs and to deal with contrasting elements; . imposes coherence throughout the process and consistency among the various hypotheses and assumptions needed to finally identify a (range of) final value(s); . gives the appraiser a communication tool, as different people are involved during the process; . allows people (even if not directly involved in the process) to understand how the value of the asset has been determined and the validity, reliability and precision of the results obtained; and . increases the bargaining power of the appraiser during the negotiation with a potential counterpart, allowing a clear and complete understanding of the value of the asset.
Notes 1. These type of resources will be called “technological assets”. 2. Utility is an economic concept and it means the ability to provide satisfaction. 3. It is assumed that the appraiser is the owner’s intangible. 4. TTO learned, by experience, that licensees are not interested in licensor’s costs. References AAVV (1998), paper presented at the WIPO Regional Seminar on Support Services for Inventors, Valuation and Commercialization of Inventions and Research Results, World Intellectual Property Organization and Technology Application and Promotion Institute, Manila, 19-21 November, available at: www.wipo.int/innovation/en/meetings/1998/inv_mnl/ AAVV (1999), “A core competency approach to valuing intangible assets”, KPMG, Measuring and reporting intellectual capital: experience, issues and prospects – technical meeting, June, available at: www.oecd.org/dataoecd/16/17/1947847.pdf Anson, W. (1996), “Establish market value for brands, trademarks and marketing intangibles”, Business Valuation Review, June, pp. 47-56. Anson, W. (1998), “Identify, value, leverage your intellectual assets”, les Nouvelles, March. Anson, W. (2001), “Traditional valuation methodologies of intellectual property”, The Licensing Journal, September, pp. 30-2. Anson, W. and Serrano, M. (2001), “Intangible asset valuation techniques”, The Licensing Journal, January, pp. 37-8. Arora, A., Fosfuri, A. and Gambardella, A. (2001), “Markets for technology and their implication for corporate strategy”, Industrial and Corporate Change, Vol. 10 No. 2, pp. 416-51.
Atuahene-Gima, K. and Patterson, P. (1993), “Managerial perceptions of technology licensing a san alternative to internal R&D in new product development: an empirical investigation”, R&D Management, Vol. 23 No. 4, pp. 327-36. Benninga, S. and Tolkowsky, E. (2002), “Real options – an introduction and an application to R&D valuation”, Engineering Economist, Vol. 47 No. 2, pp. 151-68. Berkman, M. (2002), “Valuing intellectual property assets for licensing transactions”, The Licensing Journal, Vol. 22 No. 4, pp. 16-23. Bouteiller, C. (2000), “The evaluation of intangibles: advocating for an option based approach”, paper presented at the Alternative Perspectives on Finance and Accounting Conference, Hamburg, 4-6 August. Brooke, M.Z. and Skilbeck, J.M. (1994), Licensing: The International Sale of Patents and Technical Know How, Gower, Aldershot. Brugger, G. (1989), “La valutazione dei beni immateriali legati al marketing e alla tecnologia”, Finanza Marketing Produzione, No. 1, pp. 33-52. Chatterji, D. (1996), “Accessing external sources of technology”, Research and Technology Management, Vol. 39 No. 2, pp. 48-56. Chatterji, D. and Manuel, T. (1993), “Benefiting from external sources of technology”, Research and Technology Management, Vol. 36 No. 6, pp. 21-6. Chiesa, V. (2001), R&D Strategy and Organisation (Managing Technical Change in Dynamic Contexts, Imperial College Press, London. Chiesa, V. and Manzini, R. (1998), “Organising for technological collaborations: a managerial perspective”, R&D Management, Vol. 28 No. 3, pp. 199-212. Damodaran, A. (2001), The Dark Side of Valuation: Valuing Old Tech, New Tech, and New Economy Companies, Pearson Education, Inc, Harlow. Daum, J. (2001), “How to better exploit intangible asset to create value”, The new New Economy Analyst Report, available at: www.juergendaum.com/news/07_06_2001.htm Escher, J.P. (2001), “Process of external technology exploitation as a part of technology marketing: a conceptual framework”, EHT – Center for Enterprise Science, paper presented at the Picmet 2001 Conference, available at: www.tim.ethz.ch/research/ conferences/picmet01/escher/picmet8.pdf Gilardoni, A. (1990), “Il valore del patrimonio tecnologico aziendale nelle prospettive economico-finanziarie e strategico-organizzativa”, Finanza Marketing Produzione, No. 3, pp. 93-110. Gotro, J. (2002), “Unleash your intellectual property potential: in the ‘knowledge economy’, intangible assets such as intellectual property and brand strategies play a key role in determining company value”, CircuiTree, Vol. 15 No. 8, pp. 70-3. Guatri, L. (1989), “Il differenziale fantasma: I beni immateriali nella determinazione del reddito e nella valutazione delle imprese”, Finanza Marketing Produzione, No. 1, pp. 53-62. Hoffman, R. and Smith, R. (2002), “An introduction to valuing intellectual property”, The RMA Journal, Vol. 84 No. 8, p. 44. Holzmann, O.J. (2001), Update: Mergers and Intangible Assets, Wiley, New York, NY. Howells, J. (2000), “Research and technology outsourcing and systems of innovation”, in Metcalfe, J.S. and Miles, I. (Eds), Innovation Systems in the Service Economy, Kluwer Academic Publishers, Boston, MA. Jones, G., Lanctot, A. and Teegen, H. (2000), “Determinants and performance impacts of external technology acquisition”, Journal of Business Venturing, Vol. 16, pp. 255-83.
The valuation of technology
179
EJIM 8,2
180
Khoury, S. (1998), “Valuing intellectual properties”, in Sullivan, P.H. (Ed.), Profiting from Intellectual Capital, Wiley, New York, NY. Khoury, S. (2002), “Valuation of BioPharm intellectual property: focus on research tools and platform technology”, les Nouvelles, June, pp. 48-53. Khoury, S. (2003), “Valuing of technology, technology assessment and valuation of intellectual properties”, Seminar, Milan, 1 April. Khoury, S., Daniele, J. and Germeraad, P. (2001), “Selection and application of intellectual property valuation methods in portfolio management and value extraction”, les Nouvelles, pp. 77-86, September. King, K. (2001), “The value of intellectual property, intangible assets and goodwill”, available at: http://thomsonscientific.com/ipmatters/acctecon/8199544/, Thomson Derwent, Alexandria, VA Kodama, F. (1992), “Technology fusion and the new R&D”, Harvard Business Review, Vol. 70 No. 4, pp. 70-9. Korniczky, S.S. and Stuart, C. III (2002), “IP gains importance in the valuation of company assets”, San Diego Business Journal, Vol. 23 No. 23, pp. A6-A8. Lev, B. (2001), Intangibles. Management, Measurement, and Reporting, Brookings Institution Press, Washington, DC. Mard, M.J. (2000), “Cost approach to valuing intellectual property”, The Licensing Journal, pp. 27-8, August. Mard, M.J. (2001), “Intellectual property valuation challenges”, The Licensing Journal, pp. 26-30, May. Mard, M.J., Hyden, S. and Rigby, J.S. (2000), “Intellectual property valuation”, The Financial Group, April, available at: www.fvginternational.com/library/library_ip.html Martin, M. (1999), “Financial valuation of intellectual property, excerpted from ‘Marketing of advanced materials intellectual property’”, paper presented at the Twelfth International Conference on Composite Materials, Paris, 8 July. Morris, M.R. (2001), “Intangible assets and their role in corporate value”, Value Incorporated, available at: www.valueinc.com/press/Intangible%20Assets.PDF Mullen, M. (1999), “The art of managing intellectual assets”, Financial Focus, October, available at: www.pwcglobal.com/uk/eng/about/svcs/cvc/LECTURE2.doc Mun, J. (2002), Real Options Analysis, Wiley, Hoboken, NJ. Park, Y. and Park, G. (2004), “A new method for technology valuation in monetary value: procedure and application”, Technovation, Vol. 24, pp. 387-94. Pitkethly, R. (1997), “The valuation of patents; a review of patent valuation methods with consideration of option based methods and the potential for further research”, available at: www.oiprc.ox.ac.uk/EJWP0599.html Rabe, J.G. and Reilly, R.F. (1996), “Looking beneath the surface: valuing health care intangible assets”, The National Public Accountant, Vol. 41 No. 3, pp. 14-24. Razgaitis, R. (1999), Early-Stage Technologies, Valuation and Pricing, Wiley, New York, NY. Reilly, R.F. and Schweihs, R.P. (1999), Valuing Intangible Assets, McGraw-Hill, New York, NY. Roberts, E.B. (2001), “Benchmarking global strategic management of technology”, Research Technology Management, Vol. 44 No. 2, pp. 25-36. Roberts, E.B. and Liu, W.K. (2001), “Ally or acquire? How technology leaders decide”, MIT Sloan Management Review, Vol. 43 No. 1, pp. 25-36.
Smith, G.V. and Parr, R.L. (2000), Valuation of Intellectual Property and Intangible Assets, 3rd ed., Wiley, New York, NY. Spadea, C. and Donohue, J.J. (2001), “Business valuation approaches in intellectual property”, Philadelphia Business Journal, Vol. 20 No. 31, p. 11. Stiroh, L.J. and Rapp, R.T. (1998), “Modern methods for the valuation of intellectual property”, Nera Consulting Economists, available at: wwwneracom/Publicationasp?p_ID ¼ 793 Tenenbaum, D. (2002), “Valuing intellectual property assets”, The Computer and Internet Lawyer, Vol. 19 No. 2, pp. 1-7. Further reading Chesbrough, H. and Teece, D.J. (1996), “When is virtual virtuous? Organizing for innovation”, Harvard Business Review, Vol. 74 No. 1, pp. 65-73. Granstrand, O., Bohlin, E., Oskarsson, C. and Sjo¨berg, N. (1992), “External technology acquisition in large multi-technology corporations”, R&D Management, Vol. 22 No. 2, pp. 111-33. Koruna, S. (2001), “External technology commericalization: policy guidelines”, paper presented at the EHT – Center for Enterprise Science, Picmet 2003, available at: www.tim.ethz.ch/ research/conferences/picmet03/koruna/index
The valuation of technology
181
The Emerald Research Register for this journal is available at www.emeraldinsight.com/researchregister
The current issue and full text archive of this journal is available at www.emeraldinsight.com/1460-1060.htm
EJIM 8,2
Dispersed leadership predictor of the work environment for creativity and productivity
182
John D. Politis Higher Colleges of Technology, Dubai Men’s College, Dubai, United Arab Emirates Abstract Purpose – This paper examines the relationship between the dimensions of dispersed – self-management – leadership and a number of work environment dimensions conducive to creativity and productivity. Design/methodology/approach – The study involves a questionnaire-based survey of employees from a high technology organisation operating in the United Arab Emirates (UAE). A total of 104 useable questionnaires were received from employees who are engaged in self-managing activities. These were subjected to a series of correlational and regression analyses. Findings – There are three major findings in this research. First, the relationship between dispersed leadership and the “stimulant” dimensions of the work environment for creativity is positive and significant. Second, the relationship between dispersed leadership, with the exception of encouraging self-reinforcement, and the “obstacle” dimensions of the work environment for creativity is negative and significant. Finally, the findings have clearly shown that the “stimulant” dimensions of the work environment for creativity have a positive and significant impact on both creativity and productivity. Practical implications – The study shows that the role of the leader is to be the provider of a context and situation for creativity and productivity. Thus, the art of leading creative organisations in the UAE is the art of handling people and the task of leadership in such organisations is to provide the people with the work – environmental – conditions under which they can exercise their creativity. Originality/value – The paper clarifies which of the dispersed leadership behaviours best predict the dimensions of the work environment conducive to creativity and productivity. The paper will assist organisations in the UAE in identifying those particular leader behaviours that appear to have an impact on creativity and productivity. Keywords Creative thinking, Self development, Innovation, Leadership, Productivity rate Paper type Research paper
European Journal of Innovation Management Vol. 8 No. 2, 2005 pp. 182-204 q Emerald Group Publishing Limited 1460-1060 DOI 10.1108/14601060510594693
Introduction Creativity and innovation are considered to be key factors for achieving the sustained organisational competitive advantage in the new economy. Therefore, organisations need to continuously adapt, develop, create and innovate (Kay, 1993; Martensen and Dahlgaard, 1999). President Bush (2002) believes that the strength of the US economy is built on the creativity and entrepreneurship of the people. Since it is argued that employees’ creativity makes an important contribution to organisational innovation, effectiveness and survival (Ahmed, 1998; Amabile, 1996; Kanter, 1983), there is a need for organisations to create the organisational contexts that are most supportive to idea generation and creative thinking (Amabile, 1998; Eyton, 1996; Goldsmith, 1996). In other words, for employees to be creative there must be a work environment that supports the process of creativity.
As a result, researchers and practitioners have become increasingly interested in studying the environmental factors (e.g. social, emotional, intellectual development and work conditions) conducive to creativity (Amabile et al., 1996; Oldham and Cummings, 1996; Paulus and Yang, 2000; Shelley and Perry-Smoth, 2000). Theory and research suggest that employees will be creative when they have a shared commitment to their projects (Monge et al., 1992; Payne, 1990), and when they are given adequate resources to conduct their work (Delbecq and Mills, 1985). Other areas of research revealed that employees would be creative when their work is intellectually challenging (Amabile and Gryskiewicz, 1987), and when they are given a high level of autonomy and control over their own work (King and West, 1985). Moreover, the literature reveals that organisational support and evaluation of ideas are necessary in order to support creativity (Cummings, 1965; Kanter, 1983) and that rewards and bonuses are necessary to encourage creativity and support the creative work environment (Amabile et al., 1996). Although the review of the literature suggests that supportive supervision (Oldham and Cummings, 1996) and participative management (Monge et al., 1992) foster creativity, little is known about the effect of dispersed leadership on the work environment dimensions that are most conducive to creativity and productivity. Current research lacks the empirical evidence supporting the relationship between dispersed leadership and the determinants of the creative work environment. In particular, there is an interest from academics and practitioners in addressing whether a dispersed leadership style enhances the work environment dimensions that play a positive role in creativity and productivity. This paper examines the impact of specific dispersed leadership factors on the dimensions of the creative work environment and how these affect creativity and productivity. The study involves a questionnaire-based survey of members of self-managing teams from a high technology organisation that is recognised for its creativity in the United Arab Emirates. Literature review Determinants of the work environment for creativity All innovations begin with creative ideas. In the context of this research, the term “creativity” is defined as the generation of ideas, and innovation is the implementation of these ideas (Amabile et al., 1996). Thus, here employee’s creativity is considered to be the production of ideas, products or procedures that are: novel or original; and potentially useful to the organisation (Amabile, 1996). Much of the research on creativity often ends in the description of personal characteristics of the creative person, such as openness to new experiences, less conventional and conscientious, more self-confident, self-accepting, ambitious, dominant, hostile and impulsive (Barron, 1955; Feist, 1999; MacKinnon, 1962). Feist (1999), for example, suggests that individuals with creative personalities exhibit higher creativity than those with less creative personalities. The importance of very high intelligence, however, is still disputed as the most important characteristic of the creative person (Sternberg, 1999). Other researchers emphasise motivational and social factors as the driving forces behind creativity. Collins and Amabile (1999) argue that people who love their tasks also become creative if they possess knowledge and skills in the domain and a certain degree of openness in thinking. Williams and Young (1999), in their review on
Dispersed leadership predictor 183
EJIM 8,2
184
creativity, concluded that social factors enhance creativity in organisations. Specifically, research in social psychology suggests that supportive behaviour on the part of others in the work place (e.g. co-workers and supervisors) enhances employees’ creativity (Amabile et al., 1996; Oldham and Cummings, 1996; Tierney et al., 1999). In addition, the supportive behaviour of others outside the organisation has an impact on employees’ creativity (Koestmer et al., 1999). Walberg et al. (1980) show that individuals who are highly creative as adults typically received, as children, support from their parents. In relation to supportive behaviour, the literature also suggests that support from both work and non-work sources shapes employee’s moods which, in turn, affect employee’s creativity (George and Brief, 1992). Theoretical work also suggests that when employees experience positive moods, their cognitive or emotional processes are enhanced such that they exhibit high levels of creativity (Isen, 1999). As noted earlier, other areas of research have suggested that employees will be creative when they are given adequate resources to conduct their work (Delbecq and Mills, 1985); when their work is intellectually challenging (Amabile and Gryskiewicz, 1987); and when they are given a high level of autonomy and control over their own work (King and West, 1985). In addition, the literature reveals that organisational support and evaluation of new ideas are necessary in encouraging employees’ creativity (Kanter, 1983). Oldham and Cummings (1996) demonstrate that supportive supervision made a significant contribution to the number of patent disclosures which employees wrote over a two-year period. On the other hand, it has been suggested that there are factors (e.g. internal political problems, conservatism and rigid formal structures) that could impede creativity amongst individuals (Amabile and Gryskiewicz, 1987). In a recent study, Handzic and Chaimungkalanont (2003) found that informal socialisation had a stronger positive effect on creativity than organised socialisation (i.e. based on a rigid formal structure). These findings imply that changes in organisational structure (e.g. from hierarchical to more flat structures) create a positive environment for creativity due to the increased communication between co-workers. From the creativity research described above, it is important to realise that the story of creativity has many paths with no real conclusions. With so many different antecedents of creativity, where should organisations begin? What environmental dimensions are most conducive to employees’ creativity? What are the environmental variables that might influence employees’ creativity in organisations? How can organisations assess the work environment dimensions which play a role in organisational creativity? Amabile et al. (1996) have drawn on the literature of creativity and developed an instrument which assesses the dimensions of the work environment that have been suggested in empirical research and theory as essential for organisational creativity. This instrument is referred to in the literature as KEYS. Eight determinants (dimensions) for creativity in the work environment are measured by KEYS. Of the eight, six are referred to as “stimulant” dimensions and have a positive (þ ) influence on the creative work environment, while the remaining two are referred to as “obstacle” dimensions and have a negative (2 ) effect (Amabile et al., 1996). The eight dimensions are organisational encouragement (þ ); supervisory encouragement (þ ); work group supports (þ ); freedom (þ ); sufficient resources (þ ); challenging work (þ ); workload pressure (2 ); and organisational impediments (2 ). The main areas covered by each determinant of the creative work environment are
shown in Appendix 1. However, these dimensions do not emerge spontaneously or in a vacuum. They evolve out of the context, the social and work conditions of the organisation and their impact is conditioned by the subjective perceptions of creative individuals whose experience is ruled by the history of their work environment. This draws attention, among other things (e.g. support from work and non-work sources, employee moods, individual’s personal characteristics), to the roles played by leadership in developing and linking these perceptions for creativity. The creative problem-solving literature suggests that the creative performance of teams is enhanced by leadership interventions. The literature has indicated that a leadership role of a facilitative kind fosters the generation of new (creative) outputs (Ekvall, 1991; Osborn, 1963; Parnes, 1992). Thus, there must be a dynamic interaction between leadership and creativity in supporting, encouraging and energising the perceptions and the behaviours of employees that influence the creative work environment. Dispersed leadership Leadership is defined broadly as influence processes affecting the choice of objectives of the group or organisation and the perceptions of followers (e.g. creative individuals) (Yukl, 1981). Various theories of leadership have emerged over the past 50 years. The most notable are the classical Ohio studies of initiating structure and consideration (Stogdill, 1974; Stogdill and Coons, 1957); task-orientation and relationship-orientation leadership (Blake and Mouton, 1964); participative leadership (Vroom and Yetton, 1973); and transformational and transactional leadership (Bass, 1985). At approximately the same time as the transformational and transactional theory, a separate leadership approach emerged which focuses on “dispersed leadership” (Manz and Sims, 1987). Dispersed leadership can be illustrated in four sets of writings. The first writing of an emergent dispersed leadership is in Katzenbach and Smith’s (1993, p. 45) book in which they discuss the virtues of “real teams”; that is, teams with “a small number of people with complementary skills who are committed to a common performance purpose, performance goals, and approach for which they hold themselves mutually accountable”. Katzenbach and Smith view the role of the leader of such teams in terms of developing leadership in others by building commitment and confidence, removing obstacles, creating opportunities and being part of the team. Second, Kouzes and Posner (1993) argue that credible leaders develop capacity in others. They “turn their constituents into leaders” (p. 156). Kouzes and Posner view the role of the leader in terms of helping and facilitating followers to use their abilities to lead themselves and others, a view which was supported recently by Jassawalla and Sashittal (2000). The third expression of dispersed leadership can be seen in the suggestion concerning leadership processes and skills, which may or may not reside in formally designated leaders. Hosking (1991) views leadership in terms of “organising” activity. In particular, she identifies networking as an important skill among leaders, in which the cultivation and exercise of wider social influence is the key ingredient. The fourth writing on dispersed leadership occurs in Manz and Sims’s (1986, 1987) self-leadership theory. Manz and Sims develop a theory which specifies the advantage of a type of leadership that is expected to supersede the “visionary hero” image which is a feature of the perception of leaders in the New Leadership tradition. Manz and Sims introduce a style of leadership known as “superleadership”, where followers are stimulated to
Dispersed leadership predictor 185
EJIM 8,2
186
become leaders themselves, a theme that was in fact a feature of Burns (1978) perspective on transforming leadership. In the context of superleadership, the leader is a facilitator who cultivates and motivates followers to develop creative and distinctive talents. Such leadership is known as self-management leadership (Manz and Sims, 1989). In this kind of leadership, leaders are facilitators, not heroes, and they “take inordinate steps to scout for the right mix of talents and coach each team member, . . . they encourage team members to improve their inherent, and necessarily distinctive, talents” (Jassawalla and Sashittal, 2000, p. 39), e.g. “creative talents”. For the purpose of this paper, the fourth writing on dispersed leadership (self-management leadership) that is related to creative environments was employed to predict the determinants of the work environment for creativity and productivity. Self-management leadership. Self-management leadership dimensions were derived from Manz and Sims’ (1986, 1987) theory and research. Their purpose is to measure those specific leadership dimensions that help and encourage employees to develop behaviours for greater autonomy, self-motivation and self-leadership. Manz and Sims (1987) developed the self-management leadership questionnaire (SMLQ) as a measure of such leader dimensions. The six dimensions tapped by the SMLQ are: (1) Encouraging self-observation so that the members of a team can gather the information and the knowledge required in monitoring their performance. (2) Encouraging self-goal setting so that the members of a team set performance goals. (3) Encouraging self-reinforcement so that the members of a team recognise and reinforce their performance. (4) Encouraging self-expectation so that the members of a team have high expectations for performance. (5) Encouraging rehearsal so that the members of a team practise a task before performing it. (6) Encouraging self-criticism so that the members of a team are self-critical and discourage poor performance. A review of the literature suggests that participative leadership fosters creativity (Monge et al., 1992), and employees are more creative when they are given high levels of autonomy (King and West 1985). According to Weiss (2002), “creativity is not the exclusive property of geniuses, but a set of skills and habits anyone can develop. It’s not about where one works, it’s about giving oneself permission to be creative”. It is about management creating a culture which promotes and encourages knowledge sharing (Politis, 2002), creativity and innovation. The rationale of creative leadership then is to promote a positive climate akin to consideration and transformational leadership (Rickards and Moger, 2000). But Manz and Sims’s (1987) scales contain certain themes, such as motivation, trust and respect for people’s ideas and feelings common to those measured by Stogdill’s (1963) consideration leadership and Bass’s (1985) transformational leadership. It is thus reasonable to hypothesise that the factors representing the “stimulant” components of the creative work environment will be more strongly and more positively correlated with the factors of self-management leadership than the factors representing the “obstacle” components of the creative work environment. The assumed connectedness between self-management leadership
and the determinants of the work environment for creativity is expressed in the following hypotheses. H1. Correlations between encouraging self-observation and the “stimulant” determinants of the creative work environment will be stronger and more positive than those with the “obstacle” determinants of the creative work environment. H2. Correlations between encouraging self-goal setting and the “stimulant” determinants of the creative work environment will be stronger and more positive than those with the “obstacle” determinants of the creative work environment. H3. Correlations between encouraging self-reinforcement and the “stimulant” determinants of the creative work environment will be stronger and more positive than those with the “obstacle” determinants of the creative work environment. H4. Correlations between encouraging self-expectation and the “stimulant” determinants of the creative work environment will be stronger and more positive than those with the “obstacle” determinants of the creative work environment. H5. Correlations between encouraging rehearsal and the “stimulant” determinants of the creative work environment will be stronger and more positive than those with the “obstacle” determinants of the creative work environment. H6. Correlations between encouraging self-criticism and the “stimulant” determinants of the creative work environment will be stronger and more positive than those with the “obstacle” determinants of the creative work environment. Work outcomes Work outcomes, or organisational performance, are of considerable importance for quality of life, for national economies and for increasing organisational competitiveness in the rapidly changing global economy. Owing to its importance, the concept of measuring performance has received a great deal of scientific attention in the last 20 years (Cohen and Bailey, 1997). Over those years, the concept of organisational performance has been used to evaluate and compare: (1) different leadership styles (Cohen et al., 1996; Misumi, 1985; Stogdill, 1974); (2) different types of organisational structures (Barefield and Young, 1988; Farris, 1969); (3) different types of manufacturing practices (Hiromoto, 1988; Kaplan, 1990; Young, 1992); (4) the different training and modelling techniques (Bandura, 1977; Manz and Sims, 1981, 1986); and (5) the different theories of motivation, the contributions of individual or organisational groups and a myriad of other social phenomena.
Dispersed leadership predictor 187
EJIM 8,2
188
With so many different approaches to work performance, pinning down what is important to measure in an organisation is a rather difficult task. Amabile et al. (1996) established two dimensions of work outcome, namely creativity and productivity, which fit into the broader framework of assessing the climate for creativity. The items underlying these dimensions serve the function of gauging the respondents’ perceptions of the performance of the work being carried out in their teams. The conceptual model of Amabile et al. (1996, p. 1159) suggests that the “stimulant scales” of the creative work environment will be positively related to creativity and productivity, while the “obstacle scales” will be negatively related. In that regard, it is expected that significant positive correlations will be found between the stimulant determinants of the creative work environment and the factors of creativity and productivity. Moreover, it is reasonable to hypothesise that the obstacle determinants of the creative work environment will be negatively related to creativity and productivity. The hypothesised connectedness between the determinants of the creative work environment and work outcome measures is expressed in the following hypotheses. H7. Work outcome variables (e.g. creativity and productivity) will be positively related with the “stimulant” determinants of the creative work environment. H8. Work outcome variables (e.g. creativity and productivity) will be negatively related with the “obstacle” determinants of the creative work environment.
Subjects and procedure Sample The study focused on a service organisation operating in the United Arab Emirates (UAE), which is recognised for its creativity. Seven departments involved in communications technology participated in the study. All respondents were full-time employees of the participating departments and volunteered to participate in the study. Questionnaires, written in English, containing items measuring creativity, productivity, the determinants of the creative work environment and self-management leadership were distributed to 162 members of self-managing teams in the seven departments. One hundred and four employees returned usable questionnaires, yielding a 64.2 per cent response rate. Most were from the new product development (53 per cent) and customer service (19 per cent) departments. The remainder were spread among various other areas including education/training, consulting, etc. (28 per cent). The majority were within the 21-30 age group (78 per cent). Given the relatively young age of the sample, the level of work experience is accordingly low. Eighty seven per cent of the respondents have had four or fewer years of work experience. The respondents were 5 per cent female and 95 per cent male and all had attained either a technical or university qualification taught in the English language. Procedures Survey questionnaires were pre-tested, using a small number of respondents (about one dozen; the pre-test participants did not participate in the final data collection). As a consequence of the pre-testing, relatively minor modifications were made in the written
instructions and in several of the demographic items. The revised survey was then administered to the organisational respondents in their natural work settings, during normal working hours. Written instructions, along with brief oral presentations, were given to assure the respondents of anonymity protection and to explain (in broad terms) the purpose of the research. The participants were all given the opportunity to ask questions and were encouraged to answer the survey honestly; anonymity was guaranteed and no names or other identifying information were asked. Analytical procedure The analysis of moment structures (AMOS, version 5) software (Arbuckle, 2003) was used for the factor analysis (measurement model) and for the regression analysis (path model). In past work using AMOS, researchers attempting to model relationships among a large number of variables have found it difficult to fit variables into models because there should be at least five cases for each latent variable tested in the model (Bagozzi and Yi, 1988). Therefore, steps were taken to reduce the number of measurements in the theoretical model being presented (Joreskog and Sorbom, 1989). Following the recommendations of Sommer et al. (1995), a measurement model was developed and then, with this held, a structural model. Using confirmatory factor analysis (CFA) the factorial validity of the measurement models was assessed. Given adequate validity coefficients of those measures, the number of indicators in the model was reduced by creating a composite scale for each latent variable (Politis, 2001). Joreskog and Sorbom (1989) showed that it is possible to compute an estimated score (jˆ) for each subject using factor score regression weights (vi), which are given in the output of the structural equation modelling (SEM) statistics program. This is shown in equation (1). X j^i ¼ vi xi ð1Þ where j^i is the estimated score; vi the row vector of factor score regression weights; and xi the column vector of the subject’s observed indicator variables. For example, the composite factor score of “productivity” was created by equation (2). Composite latent factor of productivity
j^ ¼ 0:152 X 13 þ 0:243 X 48 þ 0:216 X 54 þ 0:232 X 65 þ 0:163 X 74
ð2Þ
where X13 is the indicator variable 13, and 0.152 the standardised factor score regression weight of X13; X48 the indicator variable 48, and 0.243 the standardised factor score regression weight of X48; and so on. The reliability alpha (a) for each composite latent variable was computed. Given the reliability estimates, this information was built into the structural model (path) to establish the relationship between the composite latent variables. Munck (1979) showed that it is possible to fix both the regression coefficients (li), which reflect the regression of each composite variable on its latent variable, and the measurement error variances (ui) associated with each composite variable. Munck showed that in situations where the matrix to be analysed is a matrix of correlations among the composite variables, then the parameters of l and u can be computed using equations (3) and (4), respectively. The variances of the composite variables inpthis case are equal to 1. l¼ a ð3Þ
Dispersed leadership predictor 189
EJIM 8,2
190
u¼12a
ð4Þ
However, in situations where the matrix to be analysed is a matrix of covariances amongst the composite variables, then Munck showed that the parameters of l and u can be computed using equations (5) and (6), respectively. p l¼s a ð5Þ
u ¼ s 2 ð1 2 aÞ
ð6Þ
where l is the regression coefficients; u the measurement error variances; a the reliability coefficient for each composite latent variable; s the standard deviation (SD) of composite measure; and, s 2 the variance of composite measure. In the causal modelling, the covariance-based methods are exemplified by software packages such as LISREL, EQS and AMOS. Because AMOS is used in this research, equations (5) and (6) are employed to compute li and ui estimates. In turn, these values are used as fixed parameters in the structural model shown in the simplified path model of Figure 1. Each estimated coefficient is being tested for its statistical significance for the predicted causal relationships. As a test of the measurement and path models, a mixture of fit-indices were employed to assess model fit. The ratio of chi-square to degrees of freedom (x 2 =df) has been computed, with ratios of less than 2.0 indicating a good fit. However, since absolute indices can be adversely affected by sample size (Loehlin, 1992), three other relative indices, the goodness-of-fit index (GFI), the adjusted goodness-of-fit index (AGFI), and the Tucker and Lewis index (TLI) were computed to provide a more robust evaluation of model fit (Tanaka, 1987; Tucker and Lewis, 1973). For GFI, AGFI and TLI, coefficients closer to unity indicate a good fit, with acceptable levels of fit being above 0.90 (Marsh et al., 1988). For root mean square residual (RMR)
Figure 1. Simplified structural (path) model
and root mean square error approximation (RMSEA), evidence of good fit is considered to be values less than 0.05; values from 0.05 to 0.10 are indicative of moderate fit and values greater than 0.10 are taken to be evidence of a poorly fitting model (Browne and Cudeck, 1993). Finally, standardised regression weights given in the AMOS output were used to compute the value of the average variance extracted (AVE) (Fornell and Larker, 1981), with values equal to or greater than 0.5 (AVE $ 0:5) indicating adequate convergent validity (Hair et al., 1995). Results Measurement models The variables measured on the survey are the self-management leadership dimensions; the determinants of the work environment for creativity; and the work outcome measures of creativity and productivity. Independent variables. Self-management leadership measures were assessed using Manz and Sims’s (1987) 22-item SMLQ. The theory posits six dimensions of self-leadership behaviour (e.g. encouraging self-observation, self-goal setting, self-reinforcement, self-expectation, rehearsal and self-criticism). I conducted CFA of all SMLQ items in order to check for construct independence. I first fitted a six-factor model to the data, corresponding to that proposed by Manz and Sims. The fit indices of CFI, AGFI, TLI, RMR and RMSEA were 0.94, 0.98, 0.95, 0.03 and 0.05, respectively, suggesting that this model provides a good fit. Thus, the data supported the independence of six factors, namely, encouraging self-observation (three items, mean ¼ 4:59; a ¼ 0:77), encouraging self-goal setting (four items, mean ¼ 4:45; a ¼ 0:90), encouraging self-reinforcement (four items, mean ¼ 4:57; a ¼ 0:83), encouraging self-expectation (three items, mean ¼ 4:98; a ¼ 0:80), encouraging rehearsal (four items, mean ¼ 4:48; a ¼ 0:78), and encouraging self-criticism (four items, mean ¼ 4:73; a ¼ 0:78). Dependent variables. Determinants of the work environment for creativity are made up of eight subcategories, namely, organisational encouragement, supervisory encouragement, work group supports, freedom, sufficient resources, challenging work, workload pressure and organisational impediments. These categories were assessed using Amabile et al.’s (1996) 66-item instrument (KEYS) (sample items are shown in Appendix 2). I conducted CFA of all KEYS items in order to check for construct independence. I first fitted an eight-factor model to the data, corresponding to that established by Amabile and colleagues. As shown in Table I, for M8 all of the fit indices fall short of the recommended values (CFI ¼ 0:71; AGFI ¼ 0:67; TLI ¼ 0:66; RMR ¼ 0:17; and RMSEA ¼ 0:25), suggesting a poor model fit. Moreover, the AVE fall short of the recommended value of 0.50, indicating poor convergent validity for the eight-factor model. It appears that certain factors should be combined and solutions examined with fewer factors. A series of CFAs were therefore performed by considering a hierarchy of competing models, from a simple null model of zero common factors through to one-, two-, three-, four-, five- and six-factor solutions. Table I reports the chi-square difference test and their associated fit indices. As shown in Table I, the one-factor model (e.g. M2) clearly provided a poor fit: the chi-square (x 2) to degrees of freedom ratio is above 2.0 ðx 2 =df ¼ 3:46Þ; and the values of GFI, AGFI and TLI fall below the recommended value of 0.90, suggesting poor model fit. The RMR and RMSEA values are greater than 0.10 (RMR ¼ 0:12;
Dispersed leadership predictor 191
EJIM 8,2
Competing models
192
M1. Null modela M2. One-factor model M3. Two-factor model M4. Three-factor model M5. Four-factor model M6. Five-factor model M7. Six-factor model M8. Eight-factor modelb M9. Six-factor model modifiedc M10. Six-factor model modifiedd
Table I. Alternative models: chi-square difference test and associated fit indices
x2
x 2/df
r
9971.2 7191.1 3355.7 3336.2 3322.1 3320.1 3307.2 3119.2 1661.9 595.9
4.65 3.46 1.82 1.61 2.60 1.59 1.57 1.50 0.11 0.09
NA 0.000 0.000 0.000 0.000 0.000 0.001 0.001 0.004 0.021
RMR RMSEA
GFI
AGFI
TLI
AVE
0.230 0.123 0.121 0.120 0.119 0.119 0.118 0.173 0.132 0.043
0.158 0.716 0.787 0.789 0.810 0.846 0.862 0.712 0.879 0.921
0.132 0.697 0.699 0.701 0.722 0.730 0.777 0.672 0.827 0.901
NA 0.526 0.580 0.586 0.632 0.677 0.693 0.661 0.866 0.943
NA 0.367 0.372 0.421 0.429 0.467 0.486 0.484 0.572 0.582
0.209 0.156 0.148 0.146 0.144 0.143 0.142 0.252 0.134 .071
Notes: N ¼ 104; aM ¼ Model; NA ¼ not applicable; bEight-factor model proposed by Amabile et al. (1996); cM9 ¼ Six-factor model (M7) with 15 items dropped from the factors of organisational encouragement, supervisory encouragement, and work group supports; dM10 ¼ Six-factor model (M9) with additional 13 items dropped from the factors of freedom, sufficient resources, challenging work, workload pressure and organisational impediments; N ¼ 104
RMSEA ¼ 0:16), taken to be evidence of a poorly fitting model. Thus, the items cannot be subsumed under a single construct reflecting the work environment for creativity. This conclusion is further supported by the much improved fit given by the two-factor solution, as shown by the decrease in chi-square: for M3, the change in chi-square (Dx 2) is equal to 3835.4 ð7191:1 2 3355:7Þ: Moreover, further decrease in chi-square was achieved by moving from a two-factor to a three-factor model (see Table I, M4): for M4, the Dx 2 ¼ 19:5: However, the three-factor model clearly provided a poor fit as the fit indices fall below the recommended values (e.g. GFI ¼ 0:79; AFGI ¼ 0:70; TLI ¼ 0:59; RMR ¼ 0:12; and RMSEA ¼ 0:15). Although further decrease in chi-square was achieved by moving from a three-factor (e.g. M4) to a six-factor model (e.g. M7) – for M7, the Dx 2 ¼ 29:0 – the six-factor model clearly provided a poor fit as all of the fit indices fall short of the recommended values. Examination of the modification indices (MIs) provided by AMOS indicated that certain indicator variables load on more than one factor (cross-loaded). Substantial gains in model fit were obtained when a number of items were dropped from the six-factor model due to cross loading. The alternative six-factor model (e.g. M9) has better fit as the chi-square has substantially dropped ðx 2 ¼ 1661:9Þ; the chi-square to degrees of freedom ratio is below the recommended value of 2.0 ðx 2 =df ¼ 0:11Þ; the value of convergent validity has exceeded the recommended value of 0.5 ðAVE ¼ 0:57Þ; and the values of GFI, AGFI and TLI fall just below the recommended value of 0.90. Further examination of MIs suggested that a number of indicator variables, from the factors of freedom, sufficient resources, challenging work, workload pressure, and organisational impediment, load on more than one factor (cross-loaded) or show poor loading. As shown in Table I, the improvement in goodness of fit which resulted from dropping these indicator variables was substantial. The chi-square to degrees of freedom ratio is below the recommended value of 2.0 ðx 2 =df ¼ 0:09Þ; the value of convergent validity exceeded the recommended value of 0.5 ðAVE ¼ 0:58Þ; the values of GFI, AGFI and TLI exceeded the recommended value of 0.90 (e.g. GFI ¼ 0:92; AGFI ¼ 0:90; TLI ¼ 0:94), and the values of RMR and RMSEA fell below the recommended value of 0.10 (RMR ¼ 0:04; RMSEA ¼ 0:07). Further, the rho (r) value
for this model was 0.02 indicating an adequate level of model fit. Thus, the CFA results of the competing models suggest that it is appropriate to create six separate factors. The first is the factor of “encouragement for creativity” (19 items, mean ¼ 2:70; a ¼ 0:91), which consists of the original factors of organisational encouragement, supervisory encouragement and work group supports, and the factors of freedom (three items, mean ¼ 2:58; a ¼ 0:69); sufficient resources (four items, mean ¼ 2:62; a ¼ 0:73); challenging work (four items, mean ¼ 2:81; a ¼ 0:80); workload pressure (three items, mean ¼ 2:78; a ¼ 0:82); and organisational impediments (five items, mean ¼ 2:76; a ¼ 0:67). It should be noted that 28 items were dropped due to cross loading and/or poor loading; these being of the order of, or less than, 0.10. The outcome of work was assessed using Amabile et al.’s (1996) two work-performance criteria, namely, creativity (six items) and productivity (six items). I conducted CFA of all 12 items in order to check for construct independence. The fit indices of CFI, AGFI, TLI, RMR and RMSEA were 0:98=0:97; 0:96=0:95; 0:93=0:96; 0:03=0:04; and 0:07=0:08; for creativity and productivity respectively, suggesting that it is appropriate to create two separate constructs. These are creativity (five items, mean ¼ 2:70; a ¼ 0:83) and productivity (five items, mean ¼ 2:94; a ¼ 0:71). One item from each construct was dropped due to poor loading, these being of the order of, or less than, 0.09. As a result of the CFAs, the theoretical model to be tested contains six self-management leadership dimensions, the stimulant and obstacle factors of the creativity work environment and the work performance criteria of creativity and productivity, shown in Figure 2.
Dispersed leadership predictor 193
Path modelling Using the analytical procedure outlined above, the computation of the parameters l and u was performed. These parameters are used in the path model. Table II contains the mean, SDs, reliability estimates, the regression coefficient l and measurement error u estimates, and the correlations among the variables tested in this study. Once these parameters are calculated (regression coefficients (ls) which reflect the regression of each composite variable on its latent variable, and the measurement error
Figure 2. Summary of variables used in the paper
Table II. Means, SDs, l and u estimates and bivariate correlations of self-management leadership, work environment for creativity, and work outcomes 1.31 1.49 1.34 1.28 1.29 1.19
0.395 0.222 0.305 .328 0.366 0.312
ud
0.17 0.24 0.07
0.67 0.093 0.49 0.098
0.024 0.26 0.198 0.29 0.076 2 0.16 0.098 0.35 0.127 2 0.09
0.77 b 0.75 0.67 0.72 0.70 0.69
1
0.39 0.076
0.50 0.66 0.45 0.63 0.76
for creativity
1.15 1.41 1.22 1.14 1.14 1.05
lc 3
0.29 0.25 0.14 0.11 0.22
5
0.12 0.22 0.13 0.38 0.05
0.19 0.22 0.03 0.08
0.78
6
7
0.24 0.26 0.15 0.07 0.06 2 0.01
0.65 0.18
9
10
0.55 0.16
0.32
0.82
11
0.67
12
13
14
0.56 0.66 0.02 2 0.08 0.83 0.29 0.17 20.22 0.11 0.03 0.71
2 0.10 0.15
0.69 0.35 0.73 0.53 0.58 0.80 0.13 2 0.22 0.10
8
0.13 2 0.13 11
0.18 0.15 0.91 0.29 0.15 0.50 0.15 0.10 0.71 0.20 0.16 0.70 0.04 2 0.26 2 0.03
0.80 0.66 0.78 0.69 0.64
4
20.08 0.03 2 0.14 0.07
0.11 0.21 0.14 0.15 0.11
0.90 0.69 .83 0.64 0.73 0.76 0.68 0.75 0.64
2
Notes: All correlations above 0.24 are statistically significant, p , 0:01; all correlations between 0.18 and 0.23 are statistically significant, p , 0:05; Seven-point response scale used for self-management leadership, 1 ¼ definitely not true; 7 ¼ definitely true; Four-point response scale used for the perceptions of the creative work environment, 1 ¼ never; 4 ¼ always; Four-point response scale used for creativity and productivity, 1 ¼ never; 4 ¼ always; aN ¼ 104 individuals teams; bCoefficient alphas (as) are shown in bold and are located along the diagonal; p dorganised into 14 self-managing c 2 Regression coefficient, l ¼ s a; Error variance, u ¼ s ð1 2 aÞ
Determinants of the work environment 7. Encouragement for creativity 2.70 0.52 8. Freedom 2.58 0.80 9. Sufficient resources 2.62 0.53 10. Challenging work 2.81 0.70 11. Workload pressure 2.78 0.84 12. Organisational impediments 2.76 0.48 Work outcomes 13. Creativity 2.70 0.74 14. Productivity 2.94 0.58
Self-management leadership Encouraging: 1. Self-observation 4.59 2. Self-goal setting 4.45 3. Self-reinforcement 4.57 4. Self-expectation 4.98 5. Rehearsal 4.48 6. Self-criticism 4.73
Meana SD(s)
194
Latent variable
EJIM 8,2
variances (us) associated with each composite variable), I build this information into the path model to examine the relationships among the latent variables. The model of Figure 3 contains six independent variables, namely, encouraging self-observation, encouraging self-goal setting, encouraging self-reinforcement, encouraging self-expectation, encouraging rehearsal and encouraging self-criticism. It also contains the dependent variables of encouragement for creativity, freedom,
Dispersed leadership predictor 195
Figure 3. Structural estimates for the hypothesised model
EJIM 8,2
196
sufficient resources, challenging work, workload pressure, organisational impediments, creativity and productivity. The analysis revealed that the structural model of Figure 3 fits the data well, with x 2 ¼ 100:1; df ¼ 47; x 2 =df ¼ 2:13; GFI ¼ 0:90; AGFI ¼ 0:88; CFI ¼ 0:93; TLI ¼ 0:91; RMR ¼ 0:059; and RMSEA ¼ 0:09: Figure 3 displays results of hypotheses testing using SEM. Standardised path estimates (gs) are provided to facilitate comparison of regression coefficients. It should be noted that only significant regression coefficients are reported. Six of the eight hypotheses are supported by these data, for at least some dimensions of determinants of the work environment for creativity and work outcome measures. As predicted, encouraging self-observation leadership had a positive effect on three of four stimulant dimensions of the creative work environment, largely supporting H1. Specifically, encouraging self-observation is strongly and positively related to encouragement for creativity ðg1 ¼ 0:36; p , 0:01Þ; freedom ðg2 ¼ 0:42; p , 0:01Þ; and challenging work ðg4 ¼ 0:48; p , 0:01Þ: Contrary to our prediction, encouraging self-observation had a negative effect on sufficient resources ðg3 ¼ 20:20; p , 0:05Þ: Encouraging self-goal setting is related to the obstacle dimension of the creative work environment, providing no support for H2. Specifically, encouraging self-goal setting had a positive effect on workload pressure ðg5 ¼ 0:29; p , 0:10Þ; while the results showed a negative effect on organisational impediments ðg6 ¼ 20:16; p , 0:10Þ: The expected influence of encouraging self-goal setting on the stimulant dimensions of the creativity work environment was not supported by the data of this study. H3 proposed that correlations between encouraging self-reinforcement and the stimulant determinants of the creative work environment will be stronger, and more positive, than those with the obstacle determinants of the creative work environment. This hypothesis was partially supported by the data of this study, in that encouraging self-reinforcement is positively related to encouragement for creativity ðg7 ¼ 0:56; p , 0:001Þ; freedom ðg8 ¼ 0:47; p , 0:01Þ; and workload pressure ðg9 ¼ 0:37; p , 0:01Þ: As predicted, encouraging self-expectation had a strong positive effect on channelling work ðg10 ¼ 0:83; p , 0:001Þ; while the results showed a negative effect on organisational impediments ðg11 ¼ 20:33; p , 0:01Þ; marginally supporting H4. Encouraging self-expectation had no significant paths with any of the other four dimensions of the creative work environment, viz. encouragement for creativity, freedom, sufficient resources and workload pressure. Similarly, encouraging rehearsal had a significant positive effect on freedom ðg12 ¼ 0:51; p , 0:01Þ and a negative effect on workload pressure ðg13 ¼ 20:18; p , 0:10Þ; marginally supporting H5. Contrary to prediction, the results showed that the effect of encouraging self-criticism on workload pressure was negative and significant ðg14 ¼ 20:82; p , 0:001Þ; thus not supporting H6. No other paths were significant between encouraging self-criticism and the dimensions of the creative work environment. On the right side of the model, the results showed that the stimulant determinants of the work environment for creativity were positively and significantly related to the work performance criteria, supporting H7. Specifically, the work outcome dimension of creativity has shown a significant, strong and positive association to encouragement for creativity ðg15 ¼ 0:55; p , 0:001Þ; freedom ðg16 ¼ 0:42; p , 0:001Þ; sufficient
resources ðg17 ¼ 0:43; p , 0:001Þ; and challenging work ðg18 ¼ 0:79; p , 0:001Þ: Moreover, the work outcome dimension of productivity has shown a significant, strong and positive association with encouragement for creativity ðg19 ¼ 0:51; p , 0:001Þ; freedom ðg20 ¼ 0:38; p , 0:01Þ; sufficient resources ðg21 ¼ 0:56; p , 0:001Þ; and challenging work ðg22 ¼ 0:43; p , 0:001Þ: Finally, the obstacle determinant of workload pressure was negatively related to productivity only, while organisational impediments were negatively related to creativity only, partially supporting H8. Specifically, the effect of organisational impediments on the work performance dimension of creativity was negative and significant ðg23 ¼ 20:14; p , 0:05Þ: Similarly, the effect of workload pressure on productivity was negative and significant ðg24 ¼ 20:12; p , 0:10Þ: No other paths were significant between creativity and productivity and the determinants of the work environment for creativity. Furthermore, adding direct paths from self-management leadership to work performance criteria led to a significantly worse model fit. Alternative models were also examined with either paths added, reversed or removed, but all led to significantly worse model fit. Discussion This paper addresses the impact of dispersed – self-management – leadership style on the work environment determinants conducive to creativity and productivity in an organisation, which is recognised for its creativity. The findings are consistent with the context of a participative management style and employees’ creative performance theories. The results of the study indeed reinforce Monge et al.’s (1992) suggestion that a participative style of leadership fosters the determinants for creativity in the work environment. The current study seems to highlight that there are certain behaviours which managers can exhibit to promote employees’ vigilance, independence, work autonomy and creativity. The results of this study suggest that it would be beneficial for managers to encourage employees’ self-observation, self-reinforcement, self-expectation and rehearsal, as these behaviours alone explained over 29 per cent of the variance of the stimulant work dimensions of creativity. In other words, it is the self-management style that supports and encourages the reciprocal relationships among employees that are most important for creativity and the creative culture (Nonaka and Konno, 1988). Specifically, the results show that it is mainly the self-observation and self-reinforcement leadership behaviours that encourage and facilitate the determinants of the work environment for creativity, which in turn are essential for creativity and productivity. It is also important to note that the remaining 71 per cent of the variance is not explained by the variables tested in this study. One could assume that a portion of the remaining variance could be explained by other leadership styles, such as Stogdill’s (1963) consideration leadership and Bass’s (1985) transformational leadership, both of which contain themes common to those measured by Manz and Sims’s (1987) self-management leadership. In addition, another portion of the remaining variance could be explained by other antecedents to creativity, such as the support individuals receive as children from their parents (Walberg et al., 1980); the employees’ mood (Isen, 1999); and the employees’ personality characteristics (Amabile, 1996; Feist, 1999).
Dispersed leadership predictor 197
EJIM 8,2
198
The results have also shown that the variables of encouraging self-goal setting and encouraging self-expectation were negatively associated with the work environment determinant that is linked with internal friction, conservatism and rigid, formal management structures (i.e. organisational impediments). These findings support those of previous studies. In particular, Jones (1996) found that a leader with a hierarchical attitude and behaviour (diametrically opposite to the self-management leader) will create an organisational structure and work environment which reinforce power-based relationships and one-way monologue, thus blocking dialogue, freedom and learning, and hence decreasing creativity and productivity. It is possible that in a work context which requires high levels of creativity (characteristics describing the current sample), leaders with a hierarchical attitude and behaviour are likely to be perceived as a means of external control resulting in decreasing the employees’ intrinsic motivation necessary for creativity (Amabile, 1998). Thus, the art of leading creative organisations is the art of handling people and the task of leadership in such organisations is to provide the people with the work – environmental – conditions under which they can exercise their creativity. One may conclude that the role of the creative leader in an organisation is to be the provider of resources, context and situation for enhancing employees’ creativity. Creative leaders should develop a specific behaviour and character of a supportive, facilitative kind that provides employees with goal clarity, autonomy, freedom, intellectual stimulation and fair evaluation as these are found to be conducive to creativity and productivity. Moreover, the leadership style that recognises and reinforces employees’ performance (e.g. encouraging self-reinforcement) has a positive influence on workload pressure (e.g. urgent work). It is possible that if a manager encourages employees to praise themselves for a job well done (positive self-reinforcement) and the job is urgent, then the employees’ perception of urgency in the work would enhance their intrinsic motivation and creativity (Amabile, 1998). On the other hand, the leadership style that criticises and discourages employees’ poor performance (encouraging self-criticism) has a negative influence on workload pressure. It is possible that self-criticism affects people psychologically by leading them to believe that they cannot perform challenging and excessive or urgent work if the time in which to accomplish the task is perceived to be unrealistically short (Amabile, 1993; Amabile and Gryskiewicz, 1987). Finally, the findings of the study clarify which of the determinants of the work environment for creativity best predict creativity and productivity. In particular, the stimulant determinants of the work environment for creativity were found to be the fundamental levers of both creativity and productivity, while the obstacle determinants impede creativity and productivity. These findings are in alignment with Amabile et al.’s (1996) conceptual model, which predicts positive relations between creativity and the “stimulant scales” of the work environment and negative relations between creativity and the “obstacle scales” (p. 1159). I conclude that leadership in a creative organisation is largely a matter of giving employees resources, creative freedom, enthusiasm, support, a sense of ownership and encouragement. Thus, the role of the leader is to be the provider of a context and situation for creativity and productivity. The art lies in creating an organisational culture which reinforces reciprocally warm relationships and facilitates dialogue, a creative climate and innovativeness.
Limitations and future work While this research has established a clear relationship between self-management leadership and the stimulant factors to creativity, some caution must be exercised when interpreting these findings due to a number of limiting factors. First, although a quantitative study is able to establish a relatively clear picture of relationships between phenomena, it is less apt at explaining the reasons behind these relationships. Thus, future qualitative research needs to be considered to explore the exact reasons why self-management leadership tends to lead to stronger associations with the stimulant determinants of the work environment for creativity than with the obstacle determinants for creativity. Second, although SEM has a number of advantages in testing causal relationships, some caution should be noted. Given the cross-sectional nature of the study, causality cannot be tested directly, although the hypotheses imply causation. Thus, for more definite results future research should be extended and supported by a number of case or longitudinal studies showing the connection among self-management leadership, the dimensions of the creative work environment and work outcomes. Future research should also examine models that accommodate diverse leadership styles, considering, for example, the Ohio studies, the transformational style of Bass’s (1985) studies, and the self-management leadership style of the Manz and Sims’s (1987) studies, the variables of personality characteristics and employee’s attitude. Other limitations include the use of a relatively undeveloped instrument measuring the determinants of the creative work environment (note: 28 items were dropped from the measurement model due to cross or poor loading), inability to establish causality, and the relatively small sample size. References Ahmed, P.K. (1998), “Benchmarking innovation best practices”, Benchmarking for Quality Management and Technology, Vol. 5 No. 1, pp. 48-58. Amabile, T.M. (1993), “Motivational strategy: towards new conceptualisation of intrinsic and extrinsic motivation in the workplace”, Human Resource Management Review, Vol. 3, pp. 185-201. Amabile, T.M. (1996), Creativity in Context, Westview Press, Boulder, CO. Amabile, T.M. (1998), “A model of creativity and innovation in organisations”, in Staw, B.M. and Cummings, L.L. (Eds), Research in Organisational Behaviour, Vol. 10, JAI Press, Greenwich, CT, pp. 123-67. Amabile, T.M. and Gryskiewicz, S.S. (1987), “Creativity in the R&D laboratory”, Technical Report No. 30, Center for Creative Leadership, Greensboro, NC. Amabile, T.M., Conti, R., Coon, H., Lazenby, J. and Herron, M. (1996), “Assessing the work environment for creativity”, Academy of Management Journal, Vol. 39, pp. 1154-84. Arbuckle, J.L. (2003), Analysis of Moment Structures (AMOS), User’s Guide Version 5.0, SmallWaters Corporation, Chicago, IL. Bagozzi, R.R. and Yi, Y. (1988), “On the evaluation of structural equations models”, Journals of the Academy of Marketing Science, Vol. 16, pp. 74-94. Bandura, A. (1977), A Social Learning Theory, Prentice-Hall, Englewood Cliffs, NJ. Barefield, R.M. and Young, S.M. (1988), Internal Auditing in a Just-in-Time Environment, Altamonte Springs, The Institute of Internal Auditors, FL.
Dispersed leadership predictor 199
EJIM 8,2
Barron, F. (1955), “The disposition toward originality”, Journal of Abnormal and Social Psychology, Vol. 51, pp. 478-85. Bass, B.M. (1985), Leadership and Performance Beyond Expectations, Free Press, New York, NY. Blake, R.R. and Mouton, J.S. (1964), The Managerial Grid, Gulf Publishing Company, Houston, TX.
200
Browne, M.W. and Cudeck, R. (1993), “Alternative ways of assessing model fit”, in Bollen, K.A. and Scott Long, J. (Eds), Testing Structural Equations Models, Sage, Newbury Park, CA, pp. 36-62. Burns, J.M. (1978), Leadership, Harper and Row, New York, ny. Bush, G.W. (2002), “A proclamation”, Small Business Week, available at: www.whitehouse.gov/ news/releases/2002/05/20020506-2.html, (accessed 13 April, 2003). Cohen, S.G. and Bailey, D.E. (1997), “What makes teams work: group effectiveness research from the shop floor to the executive suite”, Journal of Management, Vol. 23 No. 3, pp. 239-90. Cohen, S.G., Ledford, G.E. and Spreitzer, G.M. (1996), “A predictive model of self-managing work team effectiveness”, Human Relations, Vol. 49 No. 5, pp. 643-76. Collins, M.A. and Amabile, T.M. (1999), “Motivation and creativity”, in Sternberg, R.J. (Ed.), Handbook for Creativity, Cambridge University Press, Cambridge, MA, pp. 297-312. Cummings, L.L. (1965), “Organisational climates for creativity”, Journal of the Academy of Management, Vol. 3, pp. 220-7. Delbecq, A.L. and Mills, P.K. (1985), “Managerial practices and enhanced innovation”, Organisational Dynamics, Vol. 14 No. 1, pp. 24-34. Ekvall, G. (1991), “The organisational culture of idea management: a creative climate for the management of ideas”, in Henry, J. and Walker, D. (Eds), Managing Innovation, Sage Publications, London, pp. 73-9. Eyton, R. (1996), “Making innovation fly”, Ivey Business Quarterly, Vol. 61 No. 1, p. 59. Farris, G.H. (1969), “Organisation factors and individual performance, a longitudinal study”, Journal of Applied Psychology, Vol. 53, pp. 87-92. Feist, G.J. (1999), “The influence of personality on artistic and scientific creativity”, in Sternberg, R. (Ed.), Handbook of Creativity, Cambridge University Press, Cambridge, MA, pp. 273-96. Fornell, C. and Larker, D.F. (1981), “Evaluating structural equations models with unobserved variables and measurement error”, Journal of Marketing Research, Vol. 18, pp. 39-50. George, J.M. and Brief, A.P. (1992), “Feeling good – doing good: a conceptual analysis of the mood at work-organisational spontaneity relationship”, Psychological Bulletin, Vol. 112, pp. 310-29. Goldsmith, C. (1996), “Overcoming roadblocks to innovation”, Marketing News, Vol. 30 No. 24, p. 4. Hair, J.F., Anderson, R.E., Tathan, R.L. and Black, W.C. (1995), Multivariate Data Analysis with Readings, 4th ed., Prentice Hall, Englewood Cliffs, NJ. Handzic, M. and Chaimungkalanont, M. (2003), “The impact of socialisation on organisational creativity”, Proceedings of the 4th European Conference on Knowledge Management (ECKM 2004), 18-19 September, Oxford University, Oxford, pp. 425-32. Hiromoto, T. (1988), “Another hidden edge – Japanese management accounting”, Harvard Business Review, Vol. 88 No. 4, pp. 22-7. Hosking, D.M. (1991), “Chief executives, organizing process, and skills”, European Journal of Applied Psychology, Vol. 41, pp. 95-103.
Isen, A.M. (1999), “On the relationship between affect and creative problem solving”, in Russ, S. (Ed.), Affect, Creative Experience and Psychological Adjustment, Brunner/Mazel, Philadelphia, PA, pp. 3-17. Jassawalla, A.R. and Sashittal, H.C. (2000), “Strategies of effective new product team leaders”, California Management Review, Vol. 42 No. 2, pp. 34-51. Jones, S. (1996), Developing a Learning Culture, McGraw-Hill, London. Joreskog, K.G. and Sorbom, D. (1989), LISREL 7. A Guide to the Program and Applications, 2nd ed., SPSS, Inc., Chicago, IL. Kanter, R.M. (1983), The Change Masters, Simon and Schuster, New York, NY. Kaplan, R.S. (1990), Measures for Manufacturing Performance, Harvard Business School Press, Cambridge, MA. Katzenbach, J.R. and Smith, D.K. (1993), The Wisdom of Teams: Creating the High-performing Organisation, Harvard Business Scholl, Boston, MA. Kay, J. (1993), Foundations of Corporate Success, Oxford University Press, New York, NY. King, N. and West, M.A. (1985), Experiences of Innovation at Work, SAPU Memo No. 772, University of Sheffield, Sheffield. Koestmer, R., Walker, M. and Fichman, L. (1999), “Childhood parenting experiences and adult creativity”, Journal of Research in Personality, Vol. 33, pp. 92-107. Kouzes, J.M. and Posner, B.Z. (1993), Credibility: How Leaders Gain and Lose It, Why People Demand It, Jossey-Bass, San Francisco, CA. Loehlin, J. (1992), Latent Variables Models, Erlbaum, Hillside, NJ. MacKinnon, D.W. (1962), “The nature and nurture of creative talent”, American Psychologist, Vol. 17, pp. 484-95. Manz, C.C. and Sims, H.P. Jr (1981), “Vicarious learning: the influence of modelling on organisational behaviour”, Academy of Management Review, Vol. 6, pp. 105-13. Manz, C.C. and Sims, H.P. Jr (1986), “Beyond imitation: complex behaviour and affective linkages resulting from exposure to leadership training models”, Journal of Applied Psychology, Vol. 71 No. 4, pp. 571-8. Manz, C.C. and Sims, H.P. Jr (1987), “Leading workers to lead themselves. The external leadership of self-managing work teams”, Administrative Science Quarterly, Vol. 32, pp. 106-29. Manz, C.C. and Sims, H.P. Jr (1989), Superleadership: Leading Others to Lead Themselves, Prentice-Hall, Englewood Cliffs, NJ. Marsh, H.W., Balla, J.R. and McDonald, R.P. (1988), “Goodness-of-fit indexes in confirmatory factor analysis: the effect of sample size”, Psychological Bulletin, Vol. 103 No. 3, pp. 391-410. Martensen, A. and Dahlgaard, J.J. (1999), “Strategy and planning for innovation management – supported by creative and learning organisations”, International Journal of Quality and Reliability Management, Vol. 16 No. 9, pp. 878-91. Misumi, J. (1985), The Behavioural Science of Leadership. An Interdisciplinary Japanese Research Program, University of Michigan Press, Ann Arbor, MI. Monge, P.R., Cozzens, M.D. and Contractor, N.S. (1992), “Communication and motivational predictors of the dynamics of organisational innovation”, Organisational Science, Vol. 3, pp. 250-74. Munck, I.M.E. (1979), Model Building in Comparative Education. Applications of the LISREL Method to Cross-national Survey Data, International Association for the Evaluation Achievement Monograph Series No. 10, Almqvist and Wiksell, Stockholm.
Dispersed leadership predictor 201
EJIM 8,2
202
Nonaka, I. and Konno, N. (1988), “The concept of Ba: building a foundation for knowledge creation”, California Management Review, Vol. 40 No. 3, pp. 40-54. Oldham, G.R. and Cummings, A. (1996), “Employee creativity: personal and contextual factors at work”, Academy of Management Journal, Vol. 39, pp. 607-34. Osborn, A.F. (1963), Applied Imagination, 3rd ed., Harper, New York, NY. Parnes, S.J. (1992), Sourcebook for Creative Problem-Solving, Creativity Education Foundation Press, Buffalo, NY. Paulus, P.B. and Yang, H.C. (2000), “Idea generation in groups: a basis for creativity in organisations”, Organisational Behaviour and Human Decision Processes, Vol. 82 No. 1, pp. 76-87. Payne, R. (1990), “The effectiveness of research teams: a review”, in West, M.A. and Farr, J.L. (Eds), Innovation and Creativity at Work, Wiley, Chichester, pp. 101-22. Politis, J.D. (2001), “The relationship of various leadership styles to knowledge management”, The Leadership and Organisational Development Journal, Vol. 22 No. 8, pp. 354-64. Politis, J.D. (2002), “Transformational and transactional leadership enabling (disabling) knowledge acquisition of self-managed teams: the consequences for performance”, The Leadership and Organisational Development Journal, Vol. 23 No. 4, pp. 186-97. Rickards, T. and Moger, S. (2000), “Creative leadership processes in project team development: an alternative to Tuckman’s stage model”, British Journal of Management, Vol. 11, pp. 273-83. Shelley, C.E. and Perry-Smoth, J.E. (2000), “Effects of social-psychological factors on creative performance: the role of informational and controlling expected evaluation and modelling experience”, Organisational Behaviour and Human Decision Processes, Vol. 84 No. 1, pp. 1-22. Sommer, S., Bae, S-H. and Luthans, F. (1995), “The structure-climate relationship in Korean organisations”, Asia Pacific Journal of Management, Vol. 12 No. 2, pp. 23-36. Sternberg, R.J. (1999), Handbook for Creativity, Cambridge University Press, Cambridge, MA. Stogdill, R.M. (1963), Manual for the Leader Behaviour Description Questionnaire – Form XII, Bureau of Business Research, Ohio State University, Columbus, OH. Stogdill, R.M. (1974), Handbook of Leadership: A Survey of the Literature, Free Press, New York, NY. Stogdill, R.M. and Coons, A.E. (1957), Leader Behaviour: Its Description and Measurement, Bureau of Business Research, Ohio State University, Columbus, OH. Tanaka, J.S. (1987), “How big is enough? Sample size and goodness-of fit in structural equations models with latent variables”, Child Development, Vol. 58, pp. 134-46. Tierney, P., Farmer, S.M. and Graen, G.B. (1999), “An examination of leadership and employee creativity: the relevance of traits and relationships”, Personnel Psychology, Vol. 52, pp. 591-620. Tucker, L.R. and Lewis, C. (1973), “The reliability coefficient for maximum likelihood factor analysis”, Psychometrika, Vol. 38, pp. 1-10. Vroom, V.H. and Yetton, P.W. (1973), Leadership and Decision Making, University of Pittsburgh Press, Pittsburgh, PA. Walberg, H.J., Rasher, S.P. and Parkerson, J. (1980), “Childhood and eminence”, Journal of Creative Behaviour, Vol. 13, pp. 225-31. Weiss, J. (2002), Creativity, available at: www.joyceweiss.com/creativity.html Williams, W.M. and Young, T.L. (1999), “Organisational creativity”, in Sternberg, R.J. (Ed.), Handbook of Creativity, Cambridge University Press, Cambridge, MA, pp. 373-91.
Young, S.M. (1992), “A framework for successful adoption and performance of Japanese manufacturing practices in the United States”, Academy of Management Review, Vol. 17 No. 4, pp. 677-700. Yukl, G.A. (1981), Leadership in Organisations, Prentice-Hall, Englewood Cliffs, NJ. Further reading Frese, M., Teng, E. and Wijnen, C.J. (1999), “Helping to improve suggestion systems: predictors of making suggestions in companies”, Journal of Organisational Behaviour, Vol. 20, pp. 1139-55. Manz, C.C. (1986), “Self-leadership: Toward an expanded theory of self-influence processes in organisations”, Academy of Management Review, Vol. 11, pp. 585-600. Manz, C.C. and Sims, H.P. Jr (1993), Business without Bosses: How Self-managing Teams are Building High Performance Companies, Wiley, New York, NY.
Dispersed leadership predictor 203
Appendix 1.
Figure AI. Main areas of each determinant of the creative work environment
EJIM 8,2
204
Appendix 2. Sample items for measuring the determinants of the creative work environment The statements of the instrument were developed by Professor Teresa M. Amabile and can be purchased from the Centre for Creative Leadership, PO Box 26300, Greensboro, North Carolina 27438. (1) Organisational encouragement – People are recognized to solve problems creatively in this organisation. (2) Supervisory encouragement – My supervisor serves as a good work model. (3) Work group support – There is free and open communication within my work group. (4) Freedom – I have the freedom to decide how I am going to carry out my projects. (5) Sufficient resources – Generally, I can get the resources I need for my work. (6) Challenging work – I feel challenged by the work I am currently doing. (7) Workload pressure – I have too much work to do in too little time. (8) Organisational impediments – There are many political problems in this organisation. Source: Adapted from Amabile et al. (1996, p. 1166)
The Emerald Research Register for this journal is available at www.emeraldinsight.com/researchregister
The current issue and full text archive of this journal is available at www.emeraldinsight.com/1460-1060.htm
Configured for innovation: the case of palliative care
Configured for innovation
Graydon Davison College of Law and Business, School of Management, University of Western Sydney, Penrith South DC, New South Wales, Australia
205
Abstract Purpose – To begin a process of understanding how palliative care organisations are configured to enable innovative multidisciplinary patient care teams and their management in an uncertain, complex and dynamic environment. Design/methodology/approach – A range of literature was reviewed to suggest configuration and characteristics that were tested using semi-structured interviews with the senior medical staff member at each of three Australian case study organisations. Data gathered from these interviews was supplemented with data gathered from semi-structured interviews with multidisciplinary management teams and patient care teams dealing with inpatients and home-care patients. Findings – A hybrid configuration is suggested, based on Mintzberg’s typology of organisations. Responses from interviews modify some characteristics of the suggested configuration, though generally appearing to support it. Characteristics of the external and internal environments are described. Research limitations/implications – Palliative care is rarely written off outside the healthcare literature and comparatively infrequently within it. Configuration is used to suggest the characteristics of innovative teams in an uncertain, dynamic, complex environment. The use and management of multidisciplinary patient care teams in palliative care offers interesting insights for a broad range of organisations. Practical implications – A contribution to the discourse on the relationship between configuration and innovation based in organisations without commercial imperative, delivering multi-level care for and by people involved in the end-of-life process. Originality/value – The paper continues a line of publications, beginning in 2002, describing the management of innovation in multidisciplinary palliative care teams. The originality and value of this paper and this line of research is in taking a management view of a unique environment that offers insights and lessons to a broad range of organisations. Keywords Organizational structures, Innovation, Australia, Team working, Health and medicine Paper type Research paper
Introduction This paper reports an aspect of research in Australia into the management of multidisciplinary patient care teams in palliative care organisations; uncertain, complex and dynamic care environments. In these organisations patient care teams appear to innovate in response to the ad hoc changing of patient requirements during the end-of-life process. Mixing teams that are relevant to patient requirements sometimes occurs as a result of informal communications and without recourse to formal team or scheduling meetings. It also combines professions that in hospitals would reflect a strict hierarchy and places them in contributory situations with an emphasis on the relief of distress in the patient and anybody accompanying the patient, not the hierarchy. In these situations team leadership can depend on which professional is best equipped to deal with the situation “now”. This creates an
European Journal of Innovation Management Vol. 8 No. 2, 2005 pp. 205-226 q Emerald Group Publishing Limited 1460-1060 DOI 10.1108/14601060510594729
EJIM 8,2
interesting tension in medico-legal terms as it is generally the doctor allocated to the patient who will be seen to have responsibility for the patient. The use sought from this description of configuration of palliative care organisations was to begin to understand the characteristics of these organisations as they relate to the operation of multidisciplinary patient care teams, to “search for configuration itself: for complex systems of interdependency and their orchestrating themes” (Miller, 1999, p. 27).
206 Methodology A wide range of literature was reviewed to provide a theoretical basis for the establishment of a configuration framework that was then tested using semi-structured interviews with a senior medical staff member at each of three Australian case study organisations; a Staff Specialist and two Directors of Palliative Care. Interviews were conducted on an individual basis and over a period of one hour. Each interview tested a combination of characteristics suggested by the literature as relevant to the suggested configuration. Additional data regarding a small number of characteristics was derived from semi-structured interviews conducted with multidisciplinary management teams and multidisciplinary patient care teams in the three case study organisations, regarding the topics of organisational capabilities, organisational levers and individual behaviours within the patient care teams. This involved 11 interviews, each of approximately one hour’s duration. The number of participants involved in these interviews ranged from 20 to 7, depending on the type of team involved and the case study organisation. Teams were interviewed as teams. All interviews were transcribed by the interviewer within 24 h of completion. In all, data for this paper was gathered at 14 interviews over a period of some 18 months. Innovation The discourse on innovation is frequently economically based. Writing on the need for a framework for understanding company development Nystro¨m (1980, p. 1) noted that “Few areas of economic debate are characterized by as much agreement as the role of innovation for economic development” and defines innovation as “radical, discontinuous change”. Moss Kanter (1984, p. 20) noted that “Ideas for reorganizing, cutting costs, putting in new budgeting systems, improving communication, or assembling products in teams are also innovations.” Scherer (1984, p. 8) also wrote on this theme, specifically quoting Schumpeter’s (1934) definition of innovation, “the carrying out of new combinations” in the context of technological innovation. The act of innovating was often referred to in terms of new ideas, of commercialising one or more ideas so that they can be exchanged for something of economic or competitive value (Ahmed, 1998). Within this economically based discourse innovation was referred to in a number of different ways. Zaltman et al. (1973, p. 14) discussed earlier references to innovation and established a social theme, noting that, “the distinguishing characteristic of an innovation is that instead of being an external object, it is the perception of a social unit that decides its newness”. Burns and Stalker (1979) described the organisation and management of innovation as the result of a number of social processes within organisations and Drucker (1985, p. 67) described innovation as “the effort to create purposeful, focused change in an enterprise’s economic or social potential”.
Within the healthcare literature innovation is generally referred to in terms of a small number of broad fields: healthcare technology (Moskowitz, 1999; Wyke, 1994); clinical practice and nursing practice (Forchuk and Dorsay, 1995; Tolson, 1999); and the management of healthcare bureaucracies and institutions (Fottler, 1996; Glouberman and Mintzberg, 2001a). In this paper innovation is viewed as defined in terms of exchanging ideas for the creation of a non-commercial value directly related to the care and wellbeing of people. The definition of innovation used in this paper then becomes, . . . the effort to create purposeful, focused change in an enterprise’s social potential (after Drucker, 1985).
Drucker (1985, p. 72) also wrote, “But when all is said and done, what innovation requires is hard, focused, purposeful work” that required “diligence, persistence, and commitment”. While it may be said that these are characteristics found in a number of workplaces they are the key, almost overpowering, characteristics of multidisciplinary patient care teams and team members in palliative care. Palliative care Palliative care is an environment where multi-profession teams work collegiately with patients who are dying and with the patient-based carers who support them so that the primary issue becomes and remains patient comfort (Meyers, 1997). Palliative care is delivered by multidisciplinary teams (McDonald and Krauser, 1996) that comprise a number of disciplines including nursing, medicine, pharmacology, physiotherapy, occupational therapy, social work, spiritual care, grief counselling and administration. In this environment people are the centre, not diseases, and care results from the understanding of the causes of distress (Barbato, 1999). Successful provision of palliative care is dependent upon understanding the causes of distress, whether the cause is physical, emotional or spiritual; known or unknown (McDonald and Krauser, 1996; Higginson, 1999; Witt Sherman, 1999). The patient’s end-of-life state and central role in efforts to manage that state makes the patient a participatory member of the palliative care team who maintains a level of autonomy and control in relation to the other team members (McDonald and Krauser, 1996, McGrath, 1998). According to Lazarus and Folkman (1984) uncertainty, as it is considered in the social sciences, can be said to fall into two categories, event-based and temporally based; uncertainty about what will happen and what the results will be and uncertainty about when it will happen and how long it will take. Both types of uncertainty are capable of generating confusion and helplessness, particularly in cases of physical illness and disability. Uncertainty is also capable of immobilising anticipatory coping and, therefore, the necessary decision making for dealing with the uncertainty being faced. At the end of life, changes occur at multiple levels, sometimes in parallel, without obvious causes, without notice and without clear causal linkages between change and effect. Uncertainty pervades the palliative care environment. The trajectory of disease is uncertain (Rose, 1999). Symptoms, for example pain, are not necessarily linked to obvious causes (Lewis et al., 1997). Reactions of patients and patient-based carers to the end-of-life process are uncertain (Pierce, 1999). The reactions of palliative care professionals to the situations that they encounter during the end-of-life process of
Configured for innovation
207
EJIM 8,2
208
those in their care can vary (McDonald and Krauser, 1996). The required level of extension of the palliative care service to individuals and groups who accompany the patient is uncertain (Lewis et al., 1997). In addition, the range of palliation requirements, driven at the conscious and unconscious levels, varies as does the depth of experience of each patient (Kearney, 1992). As the majority source of uncertainty is the patient this means that the patient becomes the major informant of situational change (Henkelman and Dalinis, 1998). This makes palliative care professionals dependent on each patient’s ability to explain what is changing, when and at what level and requires that the professionals be able to enable and understand that explanation. The use of multidisciplinary teams is a response to the levels of uncertainty noted above and to the range of palliation requirements that could be necessary for any given patient (McDonald and Krauser, 1996). The arrival of a patient at an end-of-life experience requiring palliative care brings the certainty that life will end, generally within a relatively short period of time. This single fact aside, uncertainty is the basis of the end-of-life experience (Davison and Hyland, 2003). In addition to this each patient is experiencing the end of life on two distinct levels, the conscious and the unconscious, and the depth of the experience at each level varies from patient to patient (Kearney, 1992). Palliative care is an uncertain, dynamic environment with a certain conclusion. Prior to arriving at that certain conclusion it is the uncertainty that directs all attempts to provide care. For the professions involved, this creates a working environment requiring ongoing work-based learning, governed by an uncertain direction of care that must follow a trajectory of need, of which the patient is the major informant (Henkelman and Dalinis, 1998). The palliative care environment is one of multi-causal uncertainty. This is addressed with individualised care for patients and their personally based support systems, using cross-functional, collaborative, multidisciplinary teams that include the patient and patient-based carers.
Configuration In seeking to describe how palliative care organisations enable their teams to undertake multidisciplinary work in this uncertain environment the Australian research has described organisational capabilities for palliative care (Davison and Hyland, 2003) and tested at interview the individual behaviours within multidisciplinary palliative care teams (Davison and Sloan, 2002), the levers used in managing these teams (Davison, 2003) and aspects of knowledge management within the teams (Davison, 2004). However, a way of understanding what it is that enables the appropriate alignment of resources and other organisational elements so that organisational capabilities are applied, management levers used and team behaviours applied and knowledge generated and managed; all at the right time and place in an uncertain, dynamic and complex environment, was now sought. Given the holistic and multi-dimensional nature of palliative care, the literature on configuration was considered a useful starting point for gaining this understanding, configuration being an approach described as “. . .taking an holistic view attempting to synthesize rather than analyse the information gathered about the organisation” by Duberly and Burns (1993, p. 26).
Miller (1999, p. 29) describes configurations as “complex systems of interdependency brought about by central orchestrating themes” that “. . .at their most useful represent common, thematically driven alignments of elements or dimensions” (p. 28). Gaining an understanding of these elements and alignments can help in understanding the organisation. The Australian research at the time of writing and the palliative care literature indicate a number of central organising themes, primary among these are the centrality of the patient (Barbato, 1999), uncertainty (Lewis et al., 1997; Pierce, 1999), multidisciplinary operations (Davison and Hyland, 2003), delivering situationally specific multidisciplinary care (Wickramasinghe and Davison, 2004) and the situationally based generation, transfer and management of implicit knowledge (Davison, 2004). These seem to also meet Miller’s (1987, p. 686) description of configuration-related imperatives, those things that can, . . . drive or organize many elements of a configuration, are the most resistant to change, and probably must change before most meaningful transformations take place.
Meyer et al. (1993, p. 1178) note that, . . .configurational enquiry represents a holistic stance, an assertion that the parts of a social entity take their meaning from the whole and cannot be understood in isolation. Rather than trying to explain how order is designed into the parts of an organization, configurational theorists try to explain how order emerges from the interaction of those parts of the whole.
The fundamental ethos of palliative care, the patient as a whole and as member of a system (McDonald and Krauser, 1996; Higginson, 1999; Witt Sherman, 1999) is clearly reflected here. This ethos demands coherence in approach and high levels of adaptability and innovation in practice. A characteristic of these practices is the hybridisation of patient typologies as members of multiple disciplines work together to form a situationally based picture of the causes of distress then attempt to provide relief. Suggesting configuration for palliative care Mintzberg was chosen as a useful source of information for the research for four reasons: (1) Mintzberg is a credible source of theory and cases on the management of organisations; (2) among Mintzberg’s work on the management of organisations is a small body of work on hospital management (Mintzberg, 1997; Glouberman and Mintzberg, 2001a, b) and on collaborative approaches (Mintzberg et al., 1996); (3) among his work on hospital management Mintzberg has transitioned his work on organisational configurations (Mintzberg, 1989) to hospitals (Glouberman and Mintzberg, 2001b); and (4) in addition, Mintzberg is among the authors contributing to the theory and application of configuration as a concept for the study of organisations (Mintzberg, 1997) and is often cited as such (Miller and Friesen, 1984; Ostroff and Schmitt, 1993; Meyer et al., 1993; Miller, 1999). Each of Mintzberg’s (1989) organisational configurations was compared to the palliative care literature reviewed for the Australian research. When indicated as
Configured for innovation
209
EJIM 8,2
210
appropriate Mintzberg (1989) was also compared to the healthcare management literature. The result was a theoretical configuration for palliative care that was then tested at interview in the three case study organisations participating in the Australian research. Mintzberg (1989) describes seven basic organisational configurations: Entrepreneurial; Machine; Professional; Diversified; Innovative; Missionary; and Political. Nothing approaching Mintzberg’s (1989) Entrepreneurial organisation configuration appeared in the palliative care literature reviewed. Entrepreneurial organisations commonly exist in dynamic, relatively simple environments, with power centralised in one individual at the top of the organisation. They have few staff, little formalised activity and make “little use of planning procedures or training routines” (Mintzberg, 1989, p. 115). By contrast, palliative care organisations operate in dynamic and uncertain environments (Henkelman and Dalinis, 1998; Pierce, 1999; McDonald and Krauser, 1996). Power is distributed among patient carers (McDonald and Krauser, 1996), formal training is evident (Witt Sherman, 1999) and, given the widespread use of professionals and the nature of palliative care itself, much activity is formalised (Lewis et al., 1997; Rasmussen and Sandman, 1998). Mintzberg’s (1989) Machine organisation configuration was also not reflected in the palliative care literature reviewed. Machine organisations offer little discretion in decision making, where palliative care organisations utilise decision making in distributed multidisciplinary teams (McDonald and Krauser, 1996; Witt Sherman, 1999). Machine organisations exist in a relatively simple and stable environment and palliative care organisations work in and with environments that are dynamic and uncertain (Henkelman and Dalinis, 1998; Pierce, 1999; McDonald and Krauser, 1996). However, Mintzberg’s (1989) Machine configuration appeared to have some relevance to the healthcare management literature and this is addressed in the following section of this paper. According to Mintzberg (1989) the Professional organisation is found in complex, relatively stable environments that require processes that must be learnt over long periods and can produce standard outcomes, although the processes themselves are often too complex to be standardised in their application. The professionals within these organisations derive their authority from their expertise and have discretion available in the application of their skills and knowledge. Coordination of effort can be tight within professional disciplines but is not so good between the disciplines because of innate rivalries between professions. As noted above, palliative care organisations operate in dynamic, relatively complex environments. They contain mixtures of clinical professionals. The focus of palliative care organisations and all members of those organisations is singularly the active delivery of multilevel care to improve the quality of life for people who are dying and to support relatives and friends as they transit the end-of-life experience (McDonald and Krauser, 1996; Bottorff et al., 1998). Palliative care organisations use multidisciplinary teams to understand manifold causes of distress (Barbato, 1999; Meyers, 1997; McDonald and Krauser, 1996; Higginson, 1999; Witt Sherman, 1999). The employment of a primarily professional workforce, carrying out complex work that is controlled by the professionals gave the appearance of Minbtzberg’s (1989) professional organisation. However, the multidisciplinary nature of the team operations and the good communications and
information exchanges between the disciplines precluded a substantial fit between the suggested type and the palliative care literature. With regard to Mintzberg’s (1989) Diversified organisation configuration, once again there did not seem to be a parallel in the palliative care literature reviewed. Diversified organisations were described as “a set of semi-autonomous units coupled together by a central administrative structure. The units are generally called divisions and the central administration, the headquarters.” (Mintzberg, 1989, p. 155). Divisions were self-sustaining entities with their own operational goals. This configuration did not fit with the palliative care literature reviewed. The literature reported holistic organisations (McGrath, 1998), using multidisciplinary teams operating across what would often be called discipline-based boundaries (McDonald and Krauser, 1996; Lewis et al., 1997; Rose, 1997) and where the organisation and each team shared the same operational and organisational goals. As for Innovative organisations, according to Mintzberg (1989, p. 199), “Sophisticated innovation requires a very different configuration, one that is able to fuse experts drawn from different disciplines into smoothly functioning ad hoc project teams.” Innovative organisations were found in complex, relatively dynamic environments where the requirement was for flexibility in structure so that different forms of expertise could be drawn together quickly to address problems and situations directly. These organisations employed people with high levels of knowledge and skill and used these as a foundation for the ongoing development of skills and knowledge relevant to the work. The use of multidisciplinary teams in the complex, dynamic environment of palliative care, where it is common to quickly deploy mixed groups of professionals in response to particular situations, was reminiscent of Mintzberg’s (1989) Innovative organisation. Palliative care organisations work with persistent uncertainty, driven by factors surrounding the central focus of their work, ethics and philosophy, the patient (Henkelman and Dalinis, 1998; Pierce, 1999; Lewis et al., 1997; Higginson, 1999). In palliative care, decision making is at times decentralised to individual patient care teams and these teams include any person relevant and available to assist in fulfilling the patient’s needs (McDonald and Krauser, 1996). This includes family and friends of the patient (Lewis et al., 1997; Rose, 1997). The need to address the patient’s situation on more than one level, for example clinically, socially and consciously, and to frequently reassess the situation (Rose, 1995) means that patient care team membership must also be reassessed as frequently and changed when necessary. The degree of fit between Mintzberg’s (1989) Innovative configuration and the palliative care literature was considered substantial. The Missionary organisation was described as having, “a very special culture – a richly developed and deeply rooted system of values and beliefs that distinguishes a particular organization from all others.” (Mintzberg, 1989, p. 221). Within these organisations, the identification between the organisation and the people who work there was, according to Mintzberg (1989), so strong that it could be used as a mechanism for coordinating activities, in place of the direct supervision that is found in machine organisations for example. The organisation’s mission was paramount here. Three characteristics of palliative care organisations suggested that the configuration for Mintzberg’s (1989) Missionary organisation was perhaps appropriate:
Configured for innovation
211
EJIM 8,2
212
(1) the singular focus of palliative care organisations (McDonald and Krauser, 1996); (2) the distinctive nature of palliative care and palliative carers, who involve themselves in an holistic care that attempts to, on as many levels as possible, return control to the patient (McGrath, 1998); and (3) the unique niche that palliative care occupies within healthcare systems (McGrath, 1998; Higginson, 1999). These factors indicated that the degree of fit between the suggested configuration and the palliative care literature was substantial. In Mintzberg’s (1989) Political organisation configuration he noted that all organisations contained conflict and therefore politics. This was followed by opinions about the likelihood of the level of politics being quite high in professional and innovative organisations, because of the distribution of power that is based in professional expertise rather than the authority of management. This, accordingly, indicated that this configuration could be considered as likely in palliative care organisations. The idea was supported, at least in terms of a climate that may encourage politicisation, in some parts of the palliative care literature reviewed. McGrath (1998) described experiences of conflict between a hospice and its healthcare bureaucracies during hospice establishment, caused by conflicting views of purpose. In describing barriers to competent palliative care in the United States, Henkelman and Dalinis (1998) noted that a politicised internal environment could be created by external contingencies, for example the medication of terminally ill patients and the perception of a hastened death, and provide ongoing uncertainties for palliative carers. Whether or not this politicised environment would maintain itself without external influences was not indicated in the palliative care literature. The palliative care literature did not report, apart from the examples given here, a politicised environment. The result here was that the degree of fit between the Political organisational configuration and the palliative care literature was minimal. The comparison of the palliative care literature and Mintzberg’s (1989) organisational configurations produced some degrees of fit between the two. From the comparison conducted it was possible to draw a conclusion that the configuration of palliative care organisations could be expected to be a hybrid of Mintzberg’s (1989) configurations, as seen in Table I. Figure 1 reflects the suggested configuration.
Table I. Suggested fit
Configuration
Fit
Degree
Entrepreneurial Machine Professional Diversified Innovative Missionary Political
No No Yes No Yes Yes Yes
Nil Nil Moderate Nil Substantial Substantial Minimal
Configured for innovation
213 Figure 1. Suggested configuration
The healthcare environment and the suggested organisational configuration While it was not the purpose of this paper to investigate the healthcare management environment there is an interface between this and palliative care organisations. This being the case, and bearing in mind the ability of organisations to be shaped by their environments (Mintzberg, 1989; Burns, 1963; Lawrence and Lorsch, 1986), it is important to have an understanding of the healthcare environment. Fortunately this understanding can also be expressed in terms of Mintzberg’s (1989) typology of organisational configurations, specifically machine organisations. There are aspects of the description of Machine organisations that are reflected in the healthcare management literature. Mintzberg (1989) described Machine organisations as structured for control, generally found in simple and relatively stable environments, with large operating units. Among the examples offered were government organisations needing to demonstrate a regulatory framework internally and externally, and the regulators themselves. In Australia both of these examples match the publicly funded healthcare bureaucracies (New South Wales Health Department, 1999; New South Wales Health Council, 2000). However, the healthcare literature reviewed for the research described an environment that is neither simple nor stable. Rather, the literature described an environment in change being driven by increasing patient demands on the quality and availability of healthcare and rising healthcare costs (New South Wales Health Department, 1999; New South Wales Health Council, 2000). Healthcare management roles and delivery systems are changing (McConnell, 1996), requiring changes to healthcare delivery capabilities (Heller et al., 2000) and paradigms (Henderson, 1995). This environment did not at first seem to match Mintzberg’s (1989) requirement for a Machine organisation. However, Mintzberg (1989) also noted that machine organisations can be capable of stabilising their environment. Publicly and privately funded healthcare bureaucracies described in the literature seem to be attempting to do exactly this with three broad strategies. These are generally headed clinical governance (Wright et al., 1999; Firth-Cozens, 1999), evidence-based decision making (Cowling et al., 1999) and vertical integration (Newhouse and Mills, 1999; Byrne and Walmus, 1999). The other interesting parallel between Mintzberg’s (1989) machine organisation configuration and the reviewed literature was the concept of machine organisations becoming the instruments of individuals or small groups of external influencers who come to dominate them. In Australia the publicly funded healthcare bureaucracies are
EJIM 8,2
214
instruments of the various Federal and State governments of the day. These bureaucracies are responsible to ministers of the various governments for the application of healthcare policy and the regulation of healthcare delivery (New South Wales Health Department, 1999; New South Wales Health Council, 2000). The minister of government also appoints and removes the senior manager in each healthcare bureaucracy. It seemed then that the public policy and regulatory environment within which palliative care organisations operated in Australia was governed by bureaucracies that were configured and behaved, to a large extent, as Mintzberg’s (1989) machine organisations. While this was not a direct concern of the research, it provided some understanding of the context for palliative care within healthcare generally. It also indicated that on the palliative care side of the interface with healthcare bureaucracies there may need to be a unit, a section, or an area that operated as something of a machine organisation in order to translate policy, performance measurement targets and results and funding demands or requests between the two. Therefore, it was determined that the suggested fit between the literatures would be modified as in Table II. Figure 2 reflects the modified suggested configuration. Characteristics of the suggested configuration At the end of the literature reviews a suggested configuration was developed. As can be seen in Figure 2, this was a hybrid configuration. A set of characteristics was then drawn from the hybrid configuration. Based in the literature reviewed, it appeared that one could expect to see an organisational configuration where specialists and professionals with high levels of skill and knowledge, who have undertaken long
Table II. Modified suggested fit
Figure 2. Modified suggested configuration
Configuration
Fit
Degree
Entrepreneurial Machine Professional Diversified Innovative Missionary Political
No No Yes No Yes Yes Yes
Nil Minimal Moderate Nil Substantial Substantial Minimal
periods of training prior to working in palliative care, were employed. Work would often be complex. Within this structure staff would be grouped functionally for convenience but allocated to multidisciplinary teams, sometimes at short notice, for particular situations or projects. It would be expected that the great majority of work tasks required collaborative effort and a primary coordinator of that effort would be, at times, informal communication between staff members on teams. It was also expected that members of this organisation would have a requirement to sustain levels of skill and knowledge using ongoing training within disciplines or other specialist or professional groups as well as to transfer knowledge and information between disciplines, teams and individuals. Further, it appeared that decision-making autonomy would accompany professionals to the multidisciplinary teams, that authority would often be sourced in professional experience and that senior managers could commonly be found working in the multidisciplinary teams. Finally, it appeared that the composition of multidisciplinary teams would be dependent on the situation that the team must address and would change as the situation required. A section of this suggested structure would be quite different because it would be the section that interfaced with the healthcare bureaucracies and regulators. In this section it was expected that there would be evidence of specialised routine tasks using formal communications and centralised decision making, with a propensity to act to stabilise the environment when possible or necessary. Within the palliative care organisation it was expected that there would be a broadly based singular focus on the purpose of the organisation, expressed as the organisation’s mission. The existence of this focus would be used as tool for indoctrination of new staff and, at times, as a coordinating mechanism for work tasks. With regard to politics, the theoretical framework suggested that a palliative care organisation would be a highly politicised organisation operating in a politicised environment. It seemed then that a group of configuration characteristics could be sought within palliative care organisations. These configuration characteristics are listed below. (1) Specialists and professionals with high levels of skill and knowledge, who had undertaken long periods of training prior to working in palliative care, would be employed. (2) Work would often be complex. (3) Staff would be grouped functionally for administrative purposes but allocated to multidisciplinary teams, sometimes at short notice, for particular situations or projects. (4) The great majority of work tasks would require collaborative effort. (5) A primary coordinator of collaborative effort would be informal communication between staff members on teams. (6) Professionals would have a requirement to sustain levels of skill and knowledge using ongoing training within disciplines or other specialist or professional groups as well as to transfer knowledge and information between disciplines, teams and individuals.
Configured for innovation
215
EJIM 8,2
216
(7) Decision-making autonomy would accompany professionals to the multidisciplinary teams and authority would often be sourced in professional experience. (8) Senior managers could be found working in the multidisciplinary patient care teams. (9) There would be a broadly based singular focus on the purpose of the organisation, expressed as the organisation’s mission. The existence of this focus would be used as tool for indoctrination of new staff and, at times, as a coordinating mechanism for work tasks. (10) The organisation would be politicised and operating in a politicised environment. (11) A section of the organisation would be structured and operate differently because it would be the section that interfaced with the healthcare bureaucracies and regulators.
Testing the characteristics of the suggested configuration Following here are the results of testing the suggested characteristics using semi-structured interviews as noted in the Methodology section of this paper. At times, excerpts from the interviews are used to highlight points. (1) Specialists and professionals with high levels of skill and knowledge, who had undertaken long periods of training prior to working in palliative care, would be employed This was common to all case study organisations. Members of disciplines involved in multidisciplinary patient care teams commonly train for a number of years before entering palliative care. The following periods were reported: Doctors train for at least 7 years after completion of their medical degree. Since the 1980s nurses have had a 3-year university degree and commonly do not come to palliative care until they have matured in the profession. Social workers, physiotherapists and occupational therapists undertake a 3- or 4-year university degree and, again, generally practice in other places until maturing in the profession prior to working in palliative care. It was noted that sometimes an individual nurse or allied health worker, for example occupational therapy or physiotherapy, would come to palliative care work earlier in their careers than the majority would. A senior doctor noted, “Most people tend to come into palliative medicine having done specialist training but not everybody. We’ve got one staff specialist who worked as a General Practitioner (GP) for a number of years and then developed an interest in palliative medicine then went back and did specialist training.” When asked to explain an average timeline for this there were some differences between disciplines. For example, with doctors, “Well, generally you can’t start being a registrar until your third or fourth year after you start as a junior doctor, the specialist training is usually another three years on top of that, so you should have done about seven years after you’ve finished your medical degree.” With regard to nurses, “Nursing is very different. It depends on when they did their nursing training. Any of the senior nurses will have done their training in a hospital
setting where they started off working on the wards from day one. Anyone who studied nursing after about the mid to late eighties will have done it as part of a University course. They’ll come out for clinical placements and then go back to uni much like junior doctors do. And the nursing course is three years at University. . . .lots of people who are nursing in palliative care have done a number of other things and it leads into palliative care. A lot of people do oncology and haematology acute nursing and just kind of see a deficit and want to do palliative care. . . .nursing staff are usually much more senior by the time they arrive here.” For allied health, for example social work, physiotherapy and spiritual care, “. . .it’s something of a specialist area but they can come into it early. Their uni course is generally three or four years. But again, it kind of depends on their own interests, what placements they’ve done as they’ve trained. We’ve got a couple of young occupational therapists (OTs) and they’re terrific.” Generally though, “they spend a number of years somewhere else before they come to palliative care.” (2) Work would often be complex This was common, again, to all case study organisations. Palliative care, as noted at the beginning of this paper, is complicated by the number of potential drivers of distress in each patient, and patient-based carers, and by the fact that the manifestation of the symptoms of distress may not have an immediately obvious relationship to the cause or causes. Two examples from the interviews help describe this complexity. The first, dealing with perhaps a subtle complexity in the management of the teams, is a senior doctor answering a question about who actually controls a multidisciplinary patient care team at any given time: “. . .the ideal, and the way it’s spoken about is that each member of the team has equal voice. At the end of the day, though, if there’s a problem or a difficulty or a complaint, if anything goes wrong, whether we like it or not it’s the medical consultant who carries the can. So I guess from my point of view the challenge is to have that, not to take over from the actual working of the team. That’s the tension that I have as a medical consultant and as the director of the service is that I carry the can but to have each member of the team feel that they have equal say in what happens to this patient. I’m not quite sure whether the other members of the team see it like that but I do. It’s a tension for me.” The second example involves a more straightforward and open situation of deciding which discipline to allocate to a patient care team: “Well, some of it is based on the information, say when the patient’s first admitted, depending on where they’ve come from, if they’ve come in from the community team we would already have a certain amount of information. Like, this patient’s come in because of poorly controlled pain, this patient’s been having falls, they’re not managing at home. You know, we have some information so we might say look this patient’s really coming in primarily for physio and mobilisation and we’re not aware of any issues regarding psychosocial care that might need social work and pastoral care. And then we generally tend to have, everybody that’s admitted has a pastoral care interview and assessment to make sure we’re not missing anything. And then it goes on from there. If you discover that what someone really needs is not physio but someone to sort out equipment then they might need an OT (occupational therapist). Or if what’s really at the basis of their falls or not taking their medication is loneliness or sadness and you work out what people need. The main formal forum for that, ‘cause a lot of it happens informally and on every
Configured for innovation
217
EJIM 8,2
218
ward there’s kind of a patient care book where everybody writes messages to everyone else like, you know, ‘Social work please see.’ or ‘Physio please see.’. Then we also have this weekly interdisciplinary team meeting, the IDM, where there’s a representative of every discipline that looks after patient care and we discuss each patient individually.” (3) Staff would be grouped functionally for administrative purposes but allocated to multidisciplinary teams, sometimes at short notice, for particular situations or projects Staff in the care delivery operation of each case study organisation are grouped functionally (by discipline). The head of each discipline is responsible for that discipline’s contribution to the care delivery process. The disciplines noted were medicine, nursing, physiotherapy, occupational therapy, social work, spiritual care and grief counselling. In each case study there was a management team, consisting of the heads of the disciplines, that was responsible for maintenance of multidisciplinary operations. The composition of multidisciplinary patient care teams was mandated in part and situational in part. Each patient had two disciplines permanently allocated: medicine and nursing. In addition, the allocation of disciplines to a patient was described as completely dependent upon the patient’s situation at any given time. With regard to medicine and nursing it was said that nursing was in the permanent foreground, with regard to the patient, and medicine was in the background except for two occasions; a formally scheduled consultation, generally on a daily basis, and a crisis situation, which could occur at any time. In the latter, the presence of the medical component of the multidisciplinary patient care team, in front of the patient, was noted as driven by the patient’s situation. Comments from the senior medical staff interviews regarding the grouping of staff: “Well, the staff are actually grouped together in their own disciplines. So, for example, there’s the medical staff and the nursing staff and there’s allied health and each of the. . . There’s occupational therapy, and they have their own manager and also physiotherapy have their’s and so on.” and “It’s broadly medical, then nursing then allied health. Three major divisions, functionally. Now I suppose there might be other divisions like executive management and clinical staff for instance. But for me, working clinically, it tends to fall into those divisions of nursing and medical and allied.” Regarding the allocation of staff to multidisciplinary teams: “It’s very much dependent on the patient themselves. Um, the resources are automatically there but not every patient needs every resource. Each patient is discussed at a weekly multidisciplinary meeting and from that it is obvious to see which patient needs what type of multidisciplinary service. By and large everyone will have, on the wards, medical and nursing input and then depending on the patient’s needs, social work, pastoral care, physiotherapy, occupational therapy, diversional therapy type of assistance.” and “It’s based on the patient. On what the individual patient’s needs are at that particular time and what could change. We actually meet, pretty much if not daily then every second day we’ll (ward team) come together in the morning and see what issues are coming up for that patient. And we have a specific meeting once a week where we all meet together in a much more formal way and discuss the issues that have come up for a patient. And then we’ll decide whether everybody needs to be
involved or just the OT or the physio or just the social worker. It usually works out that everybody does have some involvement at some time in a patient’s stay here. Obviously, at various times there’ll be more involvement than at others.” (4) The great majority of work tasks would require collaborative effort This characteristic was described as existing and in use in each case study organisation. Work tasks were described as necessarily collaborative because of the need to attend the whole of the range of drivers of the patient’s and patient-based carers’ situations. This was described as accomplished through formal weekly multidisciplinary meetings and frequent informal communications. These are noted in the statements from interviews above. (5) A primary coordinator of collaborative effort would be, at times, informal communication between staff members on teams The existence and use of this characteristic was acknowledged in the interviews conducted in all case study organisations regarding organisational capabilities, levers and individual behaviours. These interviews contained a number of references to the frequency of informal communication and its use as a driver of collaboration in multidisciplinary patient care teams. Informal communications were often described as integral with collaborative practices and the frequency of this type of communication was also described as resulting from two imperatives: the need to communicate changes in a patient’s situation as soon as possible and the need to communicate observations made across discipline boundaries. There is a large effort put into communicating about patients and patient situations, “We all talk amongst ourselves. I mean we’ll sit down and talk about the troubles that a patient might be having at home. Is there something that can be done? Would this benefit the patient? Do you think that if you saw them this would help? So that’s how we all talk together about these sorts of things.” [Nurse]. One member of a multidisciplinary ward team talked about observing “issues that might relate to another professional so that I could give that person an idea that they were needed. They have particular specialist skills and knowledge. We all have the overview”. (6) Professionals would have a requirement to sustain levels of skill and knowledge using ongoing training within disciplines or other specialist or professional groups as well as to transfer knowledge and information between disciplines, teams and individuals This characteristic was common to the case studies. All professionals undertook ongoing professional development training within their discipline. As well, weekly formal multidisciplinary team meetings were used to transfer information and knowledge between the disciplines, as were shift changes, and more frequent informal meetings occurred for the same purpose. These are mentioned in interview statements above. Each of the case studies specifically and consciously attempted to recruit professionals that they viewed as “learners”, although it was acknowledged that they were not always successful. For example, in each case study organisation there was a standard interview question that sought to ascertain what, if any, studies a prospective professional employee was undertaking over and above training required for normal
Configured for innovation
219
EJIM 8,2
220
progression in the particular discipline. It was stated that this was an indicator of willingness to learn and openness to collaboration. In an interview with a multidisciplinary management team in one case study organisation a doctor noted that, “There’s a lot of nurses, you know, who are really experienced nurses, who don’t want to go off and do courses. But on a day to day nature they’re often the ones who quiz me most, you know, ‘Why did you do that, tell me about it, tell me about it’. So, just commenting on the nurses, I don’t think there’s anyone who’s not keen to learn, in that sense.” At the same interview a Nursing Unit Manager responded, “I think that’s what really is encouraged here is that, I learn something every day from the doctors, you know, just sitting in that hand over, I’ll learn something new every day. You know what I mean? Like, you get the opportunity to learn something new because the doctors are approachable and they will tell you and they will teach you. Whereas, I’ve worked in organisations where that doesn’t happen so much either.”
(7) Decision-making autonomy would accompany professionals to the multidisciplinary teams and authority would often be sourced in professional experience This was the case in each case study organisation. In interviews with multidisciplinary teams it was noted that professionals who might be expected to rank at or near the top of a clinical hierarchy in an acute hospital were willing to defer to the experience of other disciplines, depending on the patient’s situation. Two examples of this were given. The first involved a doctor new to palliative care deferring some decision making to nurses or allied health workers who had long service in palliative care when the situation involved an assessment of the causes of distress that originally manifested themselves as a pain management problem. The second involved deferring a part of the decision-making process, information gathering from a patient, to another discipline or perhaps a non-clinician who was particularly trusted by the patient or patient-based carers, perhaps because they shared a common first language. However, when senior palliative care professionals in each case study organisation were interviewed they noted that the final responsibility for all decisions made ended with the doctor. This being the case each senior professional explained that there was a permanent level of tension in decision making and its results because of the need to sometimes defer as described above.
(8) Senior managers could commonly be found working in the multidisciplinary patient care teams That senior managers worked in the multidisciplinary patient care teams was common in all case studies, although the level of involvement with the teams differed between the case studies from frequent to sometimes depending on other roles undertaken by particular senior managers. For example, in two of the case studies the senior social worker also worked at a local acute hospital and was senior in the discipline there also. So time and availability became issues. In another example the senior doctor in one case study organisation also spent time instructing medical students and doctors in acute hospitals in palliative care practices. At times, then, it was not common for senior managers to work in multidisciplinary patient care teams.
(9) There would be a broadly based singular focus on the purpose of the organisation, expressed as the organisation’s mission. The existence of this focus would be used as tool for indoctrination of new staff and, at times, as a coordinating mechanism for work tasks This characteristic was described as existing and in use in one case study organisation. In two of the three case study organisations it was noted that the organisation’s Mission statement played little or no part in the common understanding and ethos of the organisation, although it was given a role in indoctrination. However, it was noted that this did not affect the ethos and shared purpose found within these organisations. In the third case study it was noted that the Mission statement played a large part in establishing and maintaining the ethos and that there was a group of volunteer staff that presented to various groups on achievement against the Mission. (10) The organisation would be politicised and operating in a politicised environment The characteristic was said to exist within each case study. It was stated that the multidisciplinary patient care teams displayed similar interpersonal and discipline-based conflicts to any other team that the senior professionals interviewed had experienced anywhere else in healthcare. It was noted that this was, at times, regardless of common focus or goals. In the interviews with multidisciplinary teams, two in each case study, regarding individual behaviours the issue of conflict within the teams was acknowledged under the heading of managing ambivalence. The common solution stated was face-to-face communication as soon as possible. The operating environment of the case study organisations was highly politicised for two primary reasons: (1) the environment was created by State owned and operated healthcare bureaucracies and healthcare in Australia is a political issue; and (2) the euthanasia debate that arose from time to time invariably brought palliative care into the spotlight for at least part of the debate. A comment from a nurse, “I mean I haven’t had a day this week where I haven’t had a problem with staffing. You know, they’ve identified a problem between each other, how they’re feeling on a particular day. Certainly that’s been every day this week. I think with nursing we don’t work in an ideal situation all the time, especially now with staffing. So that does impact a lot on nurses and working with agency staff. Um, so yes, um problems do arise. And you can always tell when you’re having a cycle of not having full wards staff and working a lot of agency staff. You can see the impact on you’re regular staff and how they’re coping with it and yes it can affect them quite a lot sometimes emotionally and psychologically.” With regard to the teams, one senior medical staff member noted, “People either not listening to someone else’s point of view or having a completely different view of what’s going on will cause issues. Now, people are too polite to kind of take each other on and get angry and annoyed with each other. That kind of thing tends to go on outside the meeting rather than in the meeting. The meetings tend to manage to stay focused and get through what’s important to the patient. But you know we do have some discussion if there’s a discrepancy of management. And I guess one of the other things is trying to make sure that every one is getting the opportunity to have their say. Like, some people have got a lot to say, some people are really quiet and you try
Configured for innovation
221
EJIM 8,2
222
and be inclusive and make sure that they know that their opinion is valued and that you want to hear it.” (11) A section of the organisation would be structured and operate differently because it would be the section that interfaced with the healthcare bureaucracies and regulators It was noted that each case study had more than one regulator. Commonly, there was the State Department of Health, then the owning organisation. As well, each case study organisation is and must be accredited. Regulators were described as having requirements based generally on quantitative data. Data provided to the standards certifying authority was described as a mixture of quantitative and qualitative. As to the management of these interfaces, it was described by the case study organisations as conducted by a group that stood away from the multidisciplinary teams and patient care. Discussion and conclusions It appears then that, with some small qualifications, the results of the interviews confirm that the case study palliative care organisations are configured primarily as innovative organisations that maintain their focus with a fundamental ethos that is recognised and shared by management and staff. These organisations are professionally based, contain a level of politics and, in a small part, reflect the regulators and State owned bureaucracies to which they report and from which a majority of their funding is derived. The description of the characteristics of configuration provides two pictures. The first is a picture, almost an overview, of the configuration of the organisation’s internal and external working environments. This is comprised of the following characteristics. (2) Work would often be complex. (4) The great majority of work tasks would require collaborative effort. (5) A primary coordinator of collaborative effort would be, at times, informal communication between staff members on teams. (10) The organisation would be politicised and operating in a politicised environment. The second is a picture of the configuration of resources to suit those environments. (1) Specialists and professionals with high levels of skill and knowledge, who had undertaken long periods of training prior to working in palliative care, would be employed. (3) Staff would be grouped functionally for administrative purposes but allocated to multidisciplinary teams, sometimes at short notice, for particular situations or projects. (6) Professionals would have a requirement to sustain levels of skill and knowledge using ongoing training within disciplines or other specialist or professional groups as well as to transfer knowledge and information between disciplines, teams and individuals. (7) Decision-making autonomy would accompany professionals to the multidisciplinary teams and authority would often be sourced in professional experience. (8) Senior managers could commonly be found working in the multidisciplinary patient care teams.
(9) There would be a broadly based singular focus on the purpose of the organisation, expressed as the organisation’s mission. The existence of this focus would be used as tool for indoctrination of new staff and, at times, as a coordinating mechanism for work tasks. (11) A section of the organisation would be structured and operate differently because it would be the section that interfaced with the healthcare bureaucracies and regulators. An organisation using multidisciplinary teams and operating in an uncertain, complex and dynamic environment could be said to require the characteristics listed here if innovation is an important facet of operations. However, palliative care is a unique environment that exists on the social fringe; generally a place that few people want to contemplate until they have to and perhaps not even then. There is no commercial imperative here, no persistent drive to keep costs as low as possible, no competition as it would be described in a commercial context. The interesting question raised is one about how the suggested configuration and characteristics would appear in less unique environments where economic and financial pressures are greater and can have a greater effect on teams and individuals and the return expected for innovation is competitive or commercial advantage. Work on this question has already begun with the testing of these characteristics in a cancer hospital (Terra da Silva and Davison, 2005). This is an interim step on the way to the commercial world and will be followed by further work in acute hospitals while appropriate commercial case study organisations are found. As for the Australian research, a picture of the management of innovative practices in multidisciplinary patient care teams in palliative care is developing and a number of interlinked components are appearing. The first is an understanding of the configuration of the external and internal working environments followed by an appropriate configuration of resources, resulting in an organisational configuration focused on innovation. The development of such a configuration has been described here. Second is the availability of an appropriate set of organisational capabilities that ensure an ability to manage innovative care delivery. Third is a set of organisational levers that is capable of influencing the internal environment and the characteristic behaviours of care providers. Fourth is a set of enabled behaviours that are capable of delivering innovative practices. What becomes apparent though is the primary need to understand the characteristics of configuration as a foundation for other components. This understanding positions the observer and practitioners to understand the how and why of the other components and the environments within which they must operate.
References Ahmed, P.K. (1998), “Culture and climate for innovation”, European Journal of Innovation Management, Vol. 1 No. 1, pp. 30-43. Barbato, M. (1999), “Palliative care in the 21st century – sink or swim”, Newsletter of the New South Wales Society of Palliative Medicine, May. Bottorff, J.J., Steele, R., Davies, B. and Garossino, C. et al., (1998), “Striving for balance: palliative care patients’ experiences of making everyday choices”, Journal of Palliative Care, Vol. 14 No. 1, pp. 7-17.
Configured for innovation
223
EJIM 8,2
224
Burns, T. (1963), “‘Mechanistic and Organismic Structures’, from ‘Industry in a new Age’”, New Society, pp. 17-30, January, in Pugh, D.S. (Ed.) (1997), Organization Theory – Selected Readings, Penguin, Harmondsworth. Burns, T. and Stalker, G.M. (1979), The Management of Innovation, Tavistock Publications, London. Byrne, M.M. and Walmus, A.C. (1999), “Incentives for vertical integration in healthcare: the effect of reimbursement systems/practitioner response”, Journal of Healthcare Management, Vol. 44 No. 1, pp. 34-46. Cowling, A., Newman, K. and Leigh, S. (1999), “Developing a competency framework to support training in evidence-based healthcare”, International Journal of Healthcare Quality Assurance, Vol. 12 No. 4, pp. 149-60. Davison, G. (2003), “Organisational levers to enable innovation in palliative care”, paper presented to 3rd Hospital of the Future Conference, Warwick Business School, United Kingdom, 7-9 September,. Davison, G. (2004), “Managing knowledge on the run: using temporary communication infrastructures for managing knowledge in the complex, dynamic and innovative environment of palliative care”, in Wickramasinghe, N., Gupta, J.N.D. and Sharma, S.K. (Eds), Creating Knowledge Based Health Care Organisations, Idea Group Publishing, Hershy, PA. Davison, G. and Hyland, P. (2003), “Palliative care: an environment that promotes continuous improvement”, in Geisler, E., Krabbendam, K. and Schuring, R. (Eds), Technology, Healthcare, and Management in the Hospital of the Future, Praeger Publishers, Westport, CT. Davison, G. and Sloan, T. (2002), “Palliative care teams and individual behaviours”, Team Performance Management Journal, Vol. 9 No. 3, pp. 69-77. Drucker, P.F. (1985), “The discipline of innovation”, Harvard Business Review, May-June, pp. 67-72. Duberly, J.P. and Burns, N.D. (1993), “Organizational configurations – implications for the human resource/personnel management debate”, Personnel Review, Vol. 22 No. 4, pp. 26-34. Firth-Cozens, J. (1999), “Clinical governance development needs in health service staff”, British Journal of Clinical Governance, Vol. 4 No. 4, pp. 128-34. Forchuk, C. and Dorsay, J.P. (1995), “Hildegard Peplau meets family systems nursing: innovation in theory-based practice”, Journal of Advanced Nursing, Vol. 21 No. 1, pp. 110-5. Fottler, M.D. (1996), “The role and impact of multiskilled health practitioners in the health services industry”, Hospital & Health Services Administration, Vol. 41 No. 1, pp. 55-75. Glouberman, S. and Mintzberg, H. (2001a), “Managing the care of health and the cure of disease – Part 1: Differentiation”, Healthcare Management Review, pp. 56-69, Winter. Glouberman, S. and Mintzberg, H. (2001b), “Managing the care of health and the cure of disease – Part II: Integration”, Healthcare Management Review, Winter, pp. 70-84. Heller, B.R., Oros, M.T. and Durney-Crowley, J. (2000), “The future of nursing education: 10 trends to watch”, Nursing & Healthcare Perspectives, Vol. 21 No. 1, pp. 9-13. Henderson, M.D. (1995), “Operations management in healthcare”, Journal of Healthcare Finance, Vol. 21 No. 3, pp. 44-7. Henkelman, W.J. and Dalinis, P.M. (1998), “A protocol for palliative care measures”, Nursing Management, Vol. 29 No. 1, pp. 40-6. Higginson, I.J. (1999), “Evidence based palliative care”, British Medical Journal, Vol. 319 No. 7208, pp. 462-3.
Kearney, M. (1992), “Palliative medicine – just another speciality?”, Palliative Medicine, Vol. 6, pp. 39-46. Lawrence, P.R. and Lorsch, J.W. (1986), Organization and Environment – Managing Differentiation and Integration, Harvard Business School Press, Boston, MA.
Configured for innovation
Lazarus, R.S. and Folkman, S. (1984), Stress, Appraisal and Coping, Springer, New York, NY. Lewis, M., Pearson, V., Corcoran-Perry, S. and Narayan, S. (1997), “Decision making by elderly patients with cancer and their caregivers”, Cancer Nursing, Vol. 20 No. 6, pp. 389-97. McConnell, C.R. (1996), “The evolving role of the healthcare supervisor: shifting paradigms, changing perceptions, and other traps”, Healthcare Supervisor, Vol. 15 No. 1, pp. 1-11. McDonald, K. and Krauser, J. (1996), “Toward the provision of effective palliative care in Ontario”, in Latimer, E. (Ed.), Excerpts from OMA Colloquium on Care of the Dying Patient,. McGrath, P. (1998), “A spiritual response to the challenge of routinization: a dialogue of discourses in a Buddhist-initiated hospice”, Qualitative Health Research, Vol. 8 No. 6, pp. 801-12. Meyer, A.D., Tsui, A.S. and Hinings, C.R. (1993), “Configurational approaches to organization analysis”, Academy of Management Journal, Vol. 36 No. 6, pp. 1175-95. Meyers, J.C. (1997), “The pharmacist’s role in palliative care and chronic pain management”, Drug Topics, Vol. 141 No. 1, pp. 98-107. Miller, D. (1987), “The genesis of configuration”, Academy of Management Review, Vol. 12 No. 4, pp. 686-701. Miller, D. (1999), “Notes on the study of configuration”, Management International Review, Vol. 39, pp. 27-39. Miller, D. and Friesen, P.H. (1984), Organizations – A Quantum View, Prentice Hall, Englewood Cliffs, NJ. Mintzberg, H. (1989), Mintzberg on Management, The Free Press, New York, NY. Mintzberg, H. (1997), “Toward healthier hospitals”, Healthcare Management Review, Vol. 22 No. 4, pp. 9-18. Mintzberg, H., Jorgensen, J., Dougherty, D. and Westley, F. (1996), “Some surprising things about collaboration – knowing how people connect makes it work better”, Organizational Dynamics, Vol. 25 No. 1, pp. 60-71. Moskowitz, D.B. (1999), “The trouble with medical innovation”, Business & Health, Vol. 17 No. 5, pp. 38-42. Moss Kanter, R. (1984), The Change Masters Innovation and Entrepreneurship in the American Corporation, Simon & Schuster, New York, NY. Newhouse, R.P. and Mills, M.E. (1999), “Vertical systems integration”, Journal of Advanced Nursing, Vol. 29 No. 10, pp. 22-9. New South Wales Health Council (2000), Report of the NSW Health Council A Better Health System for New South Wales, New South Wales Department of Health, 03/2000. New South Wales Health Department (1999), “Framework for managing the quality of health services in New South Wales”, State Health Publication No: (HPA) 990024,. Nystro¨m, H. (1980), Creativity and Innovation, Wiley, New York, NY. Ostroff, C. and Schmitt, N. (1993), “Configurations of organizational effectiveness and efficiency”, Academy of Management Journal, Vol. 36 No. 6, pp. 1345-61.
225
EJIM 8,2
226
Pierce, S. (1999), “Allowing and assisting patients to die: the perspectives of oncology practitioners”, Journal of Advanced Nursing, Vol. 30 No. 3, pp. 616-22. Rasmussen, B.H. and Sandman, P.O. (1998), “How patients spend their time in a hospice and in an oncological unit”, Journal of Advanced Nursing, Vol. 28 No. 4, pp. 818-28. Rose, K. (1995), “Palliative care: the nurse’s role”, Nursing Standard, Vol. 10 No. 11, pp. 38-44. Rose, K. (1997), “How informal carers cope with terminal cancer”, Nursing Standard, Vol. 30 No. 11, pp. 39-42. Rose, K. (1999), “A qualitative analysis of the information needs of informal carers of terminally ill cancer patients”, Journal of Clinical Nursing, Vol. 8 No. 1, pp. 81-8. Scherer, F.M. (1984), Innovation and Growth, Schumpeterian Perspectives, MIT Press, Cambridge, MA. Schumpeter, J.A. (1934), The Theory of Economic Development, trans. Redvers Opie, Cambridge, MA, pp. 74-94, quoted in Scherer, F.M. (1984), Innovation and Growth, Schumpeterian Perspectives, MIT Press, Cambridge, MA. Terra da Silva, M. and Davison, G. (2005), “Relating configuration and learning in Brazil and Australia”, Journal of Health Organization and Management, forthcoming. Tolson, D. (1999), “Practice innovation: a methodological maze”, Journal of Advanced Nursing, Vol. 30 No. 2, pp. 381-90. Wickramasinghe, N. and Davison, G. (2004), “Making explicit the implicit knowledge assets in healthcare: the case of multidisciplinary teams in care and cure environments”, Health Care Management Science, Vol. 7 No. 3, refereed journal article. Witt Sherman, D. (1999), “Training advanced practice palliative care nurses”, Generations, Vol. 23 No. 1, pp. 90-7. Wright, J., Smith, M.L. and Jackson, D.R.H. (1999), “Opinion clinical governance: principles into practice”, Journal of Management in Medicine, Vol. 13 No. 6, pp. 457-65. Wyke, A. (1994), “Hippocrates’s dilemma”, Economist, Vol. 330 No. 7855, pp. SS17-SS18, Mar 19. Zaltman, G., Duncan, R. and Holbek, J. (1973), Innovations and Organizations, Wiley, New York, NY. Further reading O’Connell, J.J. (1968), Managing Organizational Innovation, Richard D. Irwin, Inc., Homewood, IL.
The Emerald Research Register for this journal is available at www.emeraldinsight.com/researchregister
The current issue and full text archive of this journal is available at www.emeraldinsight.com/1460-1060.htm
Immobility of tacit knowledge and the displacement of the locus of innovation Ali Yakhlef
Immobility of tacit knowledge
227
Stockholm University School of Business, Kra¨friket, Stockholm, Sweden Abstract Purpose – The paper seeks to identify the drivers behind the displacement of the locus of innovation from a hierarchical model to a distributed environment including customers, (lead) users, intermediaries and other external stakeholders. Design/methodology/approach – Exploring the corporate, technological and contextual transformations the paper combines ideas from innovation (taking Rothwell’s 1994 fifth-generation innovation process as a point of departure) with knowledge management theories. Findings – Immobility of tacit knowledge – a prerequisite for innovation – is a crucial factor behind the increasing disintegration of the R&D function. Innovation-related activities will tend to be allocated between companies and other external sources (customers and users, etc.) depending on the location of tacit knowledge underlying them. Increasingly, customers are taking over more and more of firms’ innovation-related activities because of the high costs of importing to the R&D department the tacit knowledge underlying them. Firms, in their turn, will retain those (manufacturing) activities of which they possess experience-based knowledge. Research limitations/implications – The present research is explorative in nature; thus, the propositions tentatively developed are in need of further elaboration and empirical investigation. Practical implications – To the extent that innovation is displaced into distributed environments, one of the crucial implications for organisations is how to build the necessary competencies to effectively exploit, coordinate and streamline knowledge flows from different sources and turn them into new ideas and innovations. Originality/value – The value of the paper is its extension of Rothwell’s Fifth generation innovation process model. Keywords Customers, Knowledge management, Innovation Paper type Research paper
1. Introduction Recent research in organisational theory has featured two competing paradigms that guide the way in which organisations manage their innovation and coordinate their research and development (R&D) activities. One school of thought contends that the organisation of the R&D department is the target of increased centralisation (Martin and Harris, 2000), while the other maintains that the same function is being increasingly fragmented, decentralised, flexible, non-hierarchical or contracted out to various partners and customers (Whittington, 1990; Thomke and von Hippel, 2002; Chesbrough, 2003). More specifically, studies have examined the disaggregation of many large, integrated, hierarchical organisations into loosely coupled production arrangements, such as contract manufacturing, alternative work arrangements and strategic alliances (Ashkenas et al., 1995; Schilling and Steensma, 1999; Snow et al., 1992). At the same
European Journal of Innovation Management Vol. 8 No. 2, 2005 pp. 227-239 q Emerald Group Publishing Limited 1460-1060 DOI 10.1108/14601060510594684
EJIM 8,2
228
time, researchers have also observed that organisations are moving toward increasing integration (e.g. in the banking and health care sectors). Chesbrough (2003) contrasts the vertically integrated innovation model of Lucent Technologies, which is perhaps the premier industrial research organisation, with Cisco Systems, which lacks anything resembling the internal R&D capabilities of the former. Despite this, Cisco has consistently managed to stay abreast of Lucent’s development and even occasionally beating it in the market. While Lucent Technologies attempted to harness its internal resources and tightly controlled its innovation processes, Cisco in-sourced from the outside whatever technologies it needed, usually by building alliances and accessing lead users and customers’ ideas. Without conducting much research of its own, Cisco has nurtured one of the “world’s finest industrial R&D organisation” (Chesbrough, 2003). Thomke and von Hippel (2002) argue that, because R&D has long been a costly and inexact process, companies are increasingly resorting to a radical approach: equipping their customers with the appropriate tools so they can design their own products and services. This means that companies are externalising their stock of in-house expertise – earned through years of experience – to their customers. The aim of this externalisation is to enable customers to design and develop the products and services that suit them more precisely. In contrast to Martin and Harris (2000), Thomke and von Hippel (2002) contend that the open approach to innovation is increasingly gaining acceptance among many companies. This, they explain, is due to a growing perception that not only are customers’ needs often complex, subtle and fast changing but also are customers themselves not in a position to “fully understand their needs until they try out prototypes to explore exactly what does, and doesn’t, work” (Thomke and von Hippel, 2002). Another school of research indicates that companies are increasingly devolving innovation-related tasks not only to customers but also to various market players, knowledge brokers and information or innovation intermediaries (Quinn, 2000; Sawhney et al., 2003). Sawhney et al. (2003) report on the rise to prominence of new market actors, whom they refer to as innomediaries (or innovation intermediaries). Innovation intermediaries bridge a gap between producers and consumers and help companies speed up their innovation processes. The shift of innovation from the hierarchical, closed model to the non-hierarchical, open approach finds its highest expression in what has come to be referred to as the “Open Source” model. The open source approach brings together thousands of people from diverse communities – programmers, translators, testers and authors from almost every region of the world, in a joint venture of developing open software. The relatively open boundaries of the community make it easy to enter and exit, thereby creating a dynamic and ever-changing community. Although open source development has mainly brought together hobbyists, its popularity is increasing and gaining currency among corporations. Companies are increasingly showing interest in the open source approach as a way to support their most critical business process – innovation (Van Wendel de Joode et al., 2002). This approach is winning a growing number of converts even among government agencies (e.g. the German Parliament), corporations and investment banks (e.g. Credit Suisse First Boston) (Osterloh and Kuster, 2002).
Does this herald the end of in-house innovation? The internal R&D department had been, until recently, a site of value-creation and a significant barrier of entry. For advocates of the hierarchical school of thought, the R&D department is crucial because much of its knowledge is tacit in nature and cumulative in its development, hence difficult to codify, replicate, regulate and translate into terms suitable for market exchange (Howells, 1997; Saviotti, 1998; Martin and Harris, 2000). Surprisingly, the non-hierarchical, open approach is also expressed in terms of the immobility of tacit knowledge. But in this case, customers’ tacit knowledge is considered the point of departure. The rationale for externalising innovation is that the form of knowledge related to product development is often difficult to capture, given that the information about what the customer wants resides in the customer and the ability to satisfy those wants and needs lies within the manufacturer (Thomke and von Hippel, 2002). While the former school emphasises R&D’s tacit knowledge as the mode of operation (from the manufacturer’s perspective), the latter takes into consideration the complexity and stickiness of knowledge concerning customers (from the consumer’s perspective). The aim of this paper is to mediate between the two streams of thought, suggesting a more complex picture of innovation in which the immobility of tacit knowledge of manufacturers and customers will be the determining factors. It is assumed that innovation-related activities will be distributed among manufacturers and consumers (or other external sources) depending on the form of knowledge underlying them. Various innovation activities tend to move to the sites where tacit knowledge is located. The paper begins with an exploration of the contextual transformations that have led to the gradual displacements of the locus of innovation from the internal R&D department to external distributed environments, taking Rothwell’s (1994) fifth-generation innovation process as the main guide. This is discussed in Section 2 of the paper. According to Rothwell (1994), the locus of innovation has shifted from the R&D department where engineers’ scientific, codified and tacit knowledge was the main source for innovation, to distributed networks of partners, customers, lead-users and other constituencies, all connected through information technologies. Much has changed since Rothwell’s work. In Section 3, I will present how innovation is gradually migrating from a company’s locus of control toward external loci where tacit knowledge resides, be it consumers, market brokers, or lead users. The main implication is that these people are in a better position to access what customers feel, need and want. This is not to suggest that manufacturers’ role will recede to the background. Manufacturers also enjoy a great deal of tacit knowledge that can be regarded as immobile (such as complex engineering skills earned through many years of experience). Finally, Section 4 discusses some implications of the interplay of tacit knowledge and the displacement of the locus of innovation from the hierarchical control of a company to distributed environments lying beyond the control of the company. The paper concludes by suggesting a sixth generation innovation process that takes place beyond the hierarchical control of companies and some ensuing implications. 2. The displacement of the locus of innovation The classical view of hierarchically integrated organisations holds that the development of a new product by a team is greatly enhanced by collocating team
Immobility of tacit knowledge
229
EJIM 8,2
230
members and allowing them to dedicate full time to the project. The team members are meant to develop skills and knowledge that are specific to the particular project, making them less replaceable by other colleagues. They are able to achieve a greater understanding and commitment to the project by becoming “experts” of the project. However, such a view has gradually come to be questioned by a number of researchers and practitioners. Rothwell (1994) suggested five innovation process generations. In each of the innovation process stages, different categories of people come into limelight as sources of knowledge relevant to innovation. For instance, the technology push characterising Rothwell’s (1994) first generation innovation process implies that innovation is based on the engineer’s technological knowledge – a particular field of expert knowledge obtained from science education at schools, colleges, universities and so on. Such a body of knowledge is codified in the form of technology theories but enriched with years of experience and tacit knowledge. In the generations that follow, however, the forms of knowledge needed to innovate began to draw on other sources as well, such as marketing knowledge, production and so on (both codified as well as tacit and practical knowledge). Without being very specific about what kind of knowledge characterises which generation, it can be said in a fair manner that, by the fourth generation, a dramatic change takes place in regard to the nature and source of knowledge required for innovation. In the early 1990s, companies began to be viewed as sites that are not only primarily concerned with producing products and services, but mainly as knowledge-generators (Nonaka, 1994; Nonaka and Takeuchi, 1995) concerned with creating and integrating knowledge bases. In this context, innovation is emphasised as a model of knowledge creation (Nonaka, 1994; Nonaka and Takeuchi, 1995). The main idea of Nonaka (1994) and Nonaka and Takeuch’s (1995) view is that knowledge creation is the outcome of the dynamic interactions between the different modes of knowledge conversions. Knowledge creation, which involves the interplay of tacit and explicit knowledge, takes place inside the company. The view of organisations as creators and hoarders of knowledge shifts the focus away from financial and material resources to intellectual resources (Lockett and Thompson, 2001). Such resources vary from other resources in many aspects. Unlike other activities, knowledge-creating activities do not have to be located in a certain place and time. Nor can they be monitored closely. Creative ideas and insights can be created both during and outside working hours. Ideas can often be used by more than one person at a time. In contrast to physical resources, ideas are non-rival in use. If one passes an idea on to somebody else, that person still possesses it. In fact, certain forms of knowledge and skills do not favour replication. Even experiments may be difficult to repeat successfully, often because knowledge is not easily alienable (it cannot be separated from the person who possesses it), implying that if knowledge is to be transferred from one place to another then we have to transfer the person who has that form of knowledge – a process which may incur forbiddingly higher costs than one would be willing to pay. Furthermore, knowledge is cumulative, i.e. it builds on previous knowledge in a sense that what we learn is often shaped by our prior knowledge. Finally, knowledge can be specific – addressing a limited number of contextual issues; or general – applied in many situations and used for solving a wide range of issues.
Assuming that knowledge is a prerequisite to innovation, what, then, is the implication of the above for product innovation activities? It would be that the locus of design and manufacturing would evolve, depending on the inalienability or immobility of relevant tacit knowledge. If the form of knowledge underlying the design of a product or a service resides to a large extent in customers, then it will tend to be migrated to them. By contrast, if a manufacturer possesses the required tacit knowledge for certain design and manufacturing activities, then such activities will remain with the manufacturer. The more companies able to codify the knowledge underlying certain activities into tools, the more outsourceable to customers or partners these will tend to be. Codification enables information and knowledge to circulate between producers and consumers. This way, codification will speed up the process of transferring explicit knowledge from consumers to companies and vice-versa. However, knowledge that resists codification remains captive to the body in which it resides and the context that it is bound to. Tacit knowledge is associated with body-centred know how skills, a large part of which is not accessible even to the “knower”, for they cannot be re-presented and described discursively (as in the case of designing skills). Such skills can, therefore, only be learned through years of experience and trial and error or imitative apprenticeship to a master. Explicit knowledge, on the contrary, can be articulated and passed over via a medium (such as documents or software tools and databases, CAD/CAM programmes, etc.) and taught to students at school. Advances in the sophistication and capacity of technologies will facilitate, and push forward, the conversion of tacit knowledge into codified knowledge, thereby leading to an increased outsourcing of the knowledge that once was regarded as a company’s crown jewel. Codified knowledge plays a secondary role when companies’ main desire is to shorten their time to market and to meet their customers’ diverse needs with more precision. 2.1 In-sourcing innovation-related customer knowledge Recently, a new understanding has begun to take shape. Instead of nurtured only in-house, innovation-related knowledge can also be in-sourced from the outside (Quinn, 2000) and combined with internal skills and expertise. The spread and subsequent ubiquity of the internet has breathed new life into the use of technology as a device to capture market information and knowledge. Companies have made use of it to facilitate interactions between them and their customers. They have, over a period of time, upgraded their technological interface with their customers, such as setting up web-based communities, to gain an insight into customers’ behaviour in computer-mediated environments (Walther, 1992; Hagel and Armstrong, 1997; Kozinets, 1999; Nambisan, 2002; Schubert and Ginsburg, 2000; Rothaermel and Sugiyama, 2001; Balasubramanian and Mahajan, 2001; Lechner and Hummel, 2002). Invaluable though this form of knowledge may be, it is still limited to its technology-mediated forms – data, information, numbers and figures – all of which need to be contextualised, interpreted and absorbed by internal resources. But these processes have proved difficult to be achieved in many companies (Sawhney and Prandelli, 2000; Nambisan, 2002). Although the reconfiguration of such information through sorting, adding, categorising, re-categorising, re-contextualising and combining with internal
Immobility of tacit knowledge
231
EJIM 8,2
232
information may lead to a generation of new ideas and knowledge, thereby uncovering explicit and latent customer needs and wants, the process still has its limitations in that it takes place many steps removed from the customers’ tacit dimension, and in abstraction from their feelings and emotions (Schubert and Ginsburg, 2000). In the past, companies have used their sales force as a channel to capture part of this sticky information and knowledge. Today, information technology media are often used for capturing and diffusing explicit knowledge and information. However, face-to-face interactions and the sharing of context and perspectives is still the preferred way to capture tacit knowledge. The increased reliance on technology as an interface with customers has led companies to realise that their knowledge of their customers is not on par with customers’ changing and fluid needs and wants.
2.2 Outsourcing innovation to intermediaries Assuming that it would be too costly to capture customer information and knowledge that is tacit, idiosyncratic, subtle or difficult to articulate, companies are resorting to more conventional approaches whenever possible. For example, they have started to outsource some of their innovation-related knowledge to the external suppliers of information, knowledge brokers and innovation intermediaries who are closer to customers in the distribution channels. In fact, innovation-related knowledge can be bought from information intermediaries, knowledge brokers and innovation mediators (Quinn, 2000; Sawhney et al., 2003) who are able to enter into a deeper and more intimate relationship with customers. The rise of knowledge intermediaries can be explained by their superior ability to organise communities of consumers, users and scientists from all over the world and to elicit from them tacit insights and contextual knowledge that producers and manufacturers are unable to obtain. Customers feel more at ease with such intermediaries, for these can be viewed as more neutral than the so-called “greedy” companies. According to Sawhney et al. (2003), by June 2002, more than 10,000 scientists from 105 countries had registered on InnoCentive, an innovation intermediary web site. Half of them are from outside the United States. Over 3,000 project rooms had been opened and 14 awards had been announced ranging from $2,000 to 75,000, while several more awards are in the pipeline. The participating scientists include retired researchers, university professors and researchers working for independent clinical research organisations and more. Although the rewards are deemed modest by US standards, they may be significant for scientists from developing countries. However, there is more to this than just financial incentives. Scientists engage in these challenges for intellectual reasons as well. Innovation intermediaries fulfil two main functions. From the “seeker” companies’ perspective, they are a cost-effective, convenient and speedy way of tapping scientific knowledge – knowledge that transcends the boundaries of organisations and nations. They allow a company to expand its R&D capacity without increasing its size and incurring supplementary costs, since all payment is contingent upon satisfactory solutions. Furthermore, as scientists participating in the challenge may be versed in different fields of expertise and may not come from one nation or one continent, the synergy of different approaches and perspectives may be a creative way to solve problems that would otherwise prove hard to solve.
The increase in the various forms of innovation intermediaries can be accounted for in terms of the increase in the mobility of knowledge as the result of the mobility of workers, who spill out ideas from companies’ R&D departments (Chesbrough, 2003). Combined with the growing availability of private venture capitalists – who have helped finance new players to commercialise ideas spilled outside the silos of corporate research labs, innovation tends to migrate towards the open model (Chesbrough, 2003). Furthermore, tight connections and intensive communications between the company and its external sources of innovative ideas through new information technologies reduce the cost of transaction and interaction. 2.3 Outsourcing innovation to customers Another approach increasingly used by companies is to outsource innovation-related tasks to customers by involving them in the new product development process. Innovation is farmed out to customers because of the immobility of their knowledge. It seems easier and less costly to push innovation to where tacit knowledge resides, rather than to attempt to extract it, bring it to the firm and use it as an input in innovation processes. Involving the customer in the innovation process would reduce the risk of failure and speed up product cycles. For example, customers of computer and electronic equipment (C&EE) are increasingly putting pressures on their suppliers to design more and more complex and sophisticated solutions to their business problems (Shepherd and Ahmed, 2000). Given that customers are not in a position to thoroughly spell out to their suppliers in explicit forms what they are seeking, they enter into various forms of partnerships, in which they work together with the aim of uncovering or defining “problems for which solutions are required” (Shepherd and Ahmed, 2000). This view suggests that neither a “push” nor a “pull” approach is called for. Instead, what is needed is an interactive and relational approach between customers and suppliers. The role of C&EE suppliers is changing from product providers to “solutions” providers. Solution-based businesses may either develop the solutions internally or in-source them from outside partners. The solution components should be “architecturally compliant, i.e. easily integrated using industry standard technology” (Shepherd and Ahmed, 2000) in order for them to be streamlined and integrated with the rest of a system’s components with minimum friction. It is a challenge for solutions providers to build the required competencies and organisational adjustments to meet the needs of customers and to establish an effective environment based on a close relationship with them – an environment that locks both sides into a mutually advantageous long-term commitment (Shepherd and Ahmed, 2000). Solutions to the problems are created jointly in a process of negotiation and conversation between the suppliers and the customers (Lundvist and Yakhlef, 2004). Farming out parts of the innovation process to customers also implies that firms have to teach and train their customers how to become, in a sense, innovators. Thomke and von Hippel (2002) argue that, since R&D has long been a costly and inexact process, some companies are now equipping their customers with the tools to design and develop their own products. These tools – called customer tools – involve a considerable amount of human expertise and knowledge (both tacit and explicit), which a firm has managed over the years to codify and incorporate into databases, such as CAD/CAM programmes. This is because knowledge related to product
Immobility of tacit knowledge
233
EJIM 8,2
234
development is often difficult to achieve, given that information about what the customer wants resides in the customer, and the knowledge to solve customers’ wants and needs (or how to satisfy them) lies with the manufacturer. Customer tools are based on gate-array technology that enables customers and users to test a design and create their own prototypes through trial and error. The tools are customer-friendly as they use Boolean algebra, which is the design language of electrical engineers. In addition, customer tools involve large libraries of pre-tested circuit modules and information and knowledge about production processes so that customers and users can test their designs to ensure that they can be manufactured. Thomke and von Hippel (2002) observe that more recent technology – chips called “field programmable gate arrays” (FPGAs) – is poised to enable the customer to become both the designer and the manufacturer. The traditional approach, according to the author, will pale in comparison to this emerging one. Among the advantages, these tools significantly improve customer satisfaction since they are better suited to address subtle, idiosyncratic aspects of customer needs – aspects that the manufacturer may not be able to know otherwise. In addition, this approach enables designs to be completed in a more timely fashion, given that customers can create them at their own site. Another advantage identified by the authors is that designs can be manufactured the first time around, reducing the time and costs associated with prototyping and testing. This is especially important when fast product turnaround is a crucial factor. As noted by Thomke and von Hippel (2002), it is a daring move to make in-house knowledge accumulated through years of experience available on a web site. This approach is gaining ground and expanding beyond B2B practices. Procter & Gamble’s initiative to shift parts of its product design to its customers is one such example. The consumer-product giant has recently changed its approach to innovation, extending its internal R&D to the outside world through the slogan “Connect & Develop” (Chesbrough, 2003). Procter & Gamble’s sites pg.com and reflect.com allow women to “brand” their own versions of makeup, perfume and other beauty-care products. It also allows P&G to start mapping what may be the next frontier of consumer-product marketing: mass customisation. Using interactive software, visitors to the site mix and match various options – colours, scents, and skin-care preferences – to create their own brand. Reflect.com even allows customers to redesign a product as many times as they want. But handing the control over product design to customers does not reduce the importance of listening to them. Rane Kline, Manager of the Concierge Service says: “We haven’t made a single change that hasn’t started from a conversation with one of our customers . . . . Every meeting, every decision is driven by what we hear from them”. When customers who paid premium prices for reflect items expressed disappointment with the weakly branded, homespun look of the company’s original packaging, Reflect rolled out a new look that combines personalisation with a strong brand identity. Let us observe in passing that other theorists have warned against listening too much to their customers, arguing that this will only lead to imitative, unimaginative solutions (Ulwick, 2002). The role of customers in idea generation has mainly been recognised in connection with incremental, continuous innovation. But with regard to radical innovation, the value that customers can bring to the idea generation process is claimed to be limited (Christensen, 1997; O’Connor, 1998). Any attempt to settle this
issue will take us too far afield, for our present concern here is to map the shift to external sources of innovation. 3. Toward a new division of innovation activities Although companies are increasingly shifting their innovation process to intermediaries and customers, this does not mean that they are outsourcing all their capabilities. What is happening is that manufacturers are keeping in-house designs of products requiring specific technical skills that are difficult to migrate to customers. Customers, on the other hand, may take over the design of those activities “that require quick turnarounds or a detailed and accurate understanding of the customer’s need” (Thomke and von Hippel, 2002). Users tend to know more than manufacturers about their particular needs and their particular usage environments. Manufacturers tend to specialise in particular types of solutions for those needs and wants. When it comes to complex activities that require manufacturing and design expertise, the R&D department will still prevail. Chesbrough (2003) emphasises that innovation in many industries – such as copiers, computers, disk drives, semiconductors, telecommunications equipment, pharmaceuticals, biotechnology, banking, insurance and consumer packaged goods and even military weapons and communications systems – is shifting from the closed to the open innovation model. It is situated at the interface between the suppliers and the customers, who are not under the direct control of the central R&D lab. Although the open innovation model is rising in prominence in response to the perceived deficiencies of the fully integrated model, it is unlikely that this model will be adopted in such an industry as the nuclear-reactor industry. This is because it relies mainly on internal and confidential ideas, has low labour mobility, involves little venture capital and few (and weak) start-ups and relatively little research conducted at universities (Chesbrough, 2003). All in all, companies will still have to perform “the difficult and arduous work necessary to convert promising research results into products and services that satisfy customers’ needs” (Chesbrough, 2003) and solutions that solve customers’ problems (Shepherd and Ahmed, 2000). The R&D department will have to learn to collaborate with external innovators and be able to harness outside ideas effectively and productively. This shift requires a number of organisational, technological, integrative and market competencies (Shepherd and Ahmed, 2000). 4. Concluding remarks and implications This paper has explored the changing sites of knowledge – a prerequisite to innovation – and the subsequent effects on the locus of innovation. Due to the cost and time of converting customers’ tacit and fluid knowledge into explicit and usable information, companies are migrating significant innovation-related tasks to their customers and users. Such attempts take place through companies’ collaborative initiatives with customers in order to access their latent needs and tacit insights, and through interfaces that transfer bodies of codified knowledge in the form of tools, offering customers the opportunity to design the products and services that meet their diverse and idiosyncratic needs and wants. Whereas a large amount of manufacturers’ engineering, technological knowledge are increasingly transferred to customers’ and users’ sites, more tacit and hard-to-codify manufacturing skills are kept in-house.
Immobility of tacit knowledge
235
EJIM 8,2
236
Companies are increasingly looking beyond the confines of their organisations for ideas and suggestions. The boundaries of the R&D department have been extended to include partners and customers, whose ideas and insights are crucial for companies’ survival. The open innovation model challenges the conventional wisdom that says the innovation context is the most effective when regarded as a private property and hierarchically controlled (Lawrence and Lorsch, 1967). This is where Rothwell’s (1994) “fifth-generation innovation process” – to a large extent based on the philosophy of self-reliance and hierarchical control of innovation that has prevailed in many companies for most of the 20th century – is to be revisited. The newly emerging distributed innovation model seems to proclaim a “sixth-generation innovation process”, which ushers companies’ R&D departments into a fundamental shift in the way they organise and bring their ideas to market. In thinking about the emerging pattern of innovation whereby more and more activities are being outsourced to external actors, it is useful to make a distinction between what Henderson and Clark (1990) calls “component knowledge” (knowledge about each of the core design concepts and the way in which they are implemented in a particular component) and “architectural knowledge” (knowledge about the ways in which the components are integrated and linked together into a coherent whole). According to the author, successful product development requires both types of knowledge: component knowledge and architectural knowledge. However, from the perspective of the present paper, it would seem that as long as companies intend to keep in-house architectural knowledge, they might rely on infomediaries and other knowledge brokers and customers for market knowledge that they are unable to access at reasonable costs and in a timely manner. Architectural knowledge requires tight connections and intense communications between the company and its external sources of innovative ideas. In response to customers’ increased role in innovation, companies will have to structure their customer interface in novel ways. This interface then becomes a crucial area to manage. Boundary-spanning managers may bridge the knowledge gap between the market and the company by ensuring the transfer of both codified and tacit knowledge through frequent and intense communications, and by providing customer support during trial-and-error iterations, which are necessary for product development. Rather than producing products and services, companies will also have to attempt to produce “producers”, i.e. customers who would extend the task of producing. Lastly, the growing popularity of the open-source model challenges companies to rethink their hierarchical, closed system of innovation. Many will probably adopt the approach taken by such companies as IBM, which recently took the bold step of placing $40 million’s worth of in-house tools for developing software into the public domain to encourage people to write programmes that run on Linux. By doing this, IBM is seeking to help make Linux a widespread standard (Thomke and von Hippel, 2002). More and more companies are combining open source elements and closed system elements (Wendel de Joode et al., 2002). With borders between producers, consumers and other actors being demolished, organisations are facing the challenge to revamp their structure and competencies so as to support knowledge capture. Shepherd and Ahmed (2000) suggest that bringing suppliers and customers to an intimate co-creative
process will put more emphasis on integrative skills and market/business knowledge and less on technical skills. The focus will shift towards the development of deeper relationships with customers and with external sources, because these relationships grant companies access to customers’ tacit insights and latent needs. As noted by Iansiti (1998) in a study of Netscape, involving beta users in product development requires new competences from staff members as well as new organisational capabilities in order to integrate different sources of input. 4.1 Limitations of the study and suggestions for future research topics Although the paper draws on a number of changes in corporate practices with regard to managing R&D activities and also on a number of observations made by theorists, its nature still remains exploratory. Further studies should focus on empirical investigations of companies’ innovation strategies and their relationship to R&D. The allocation of their resources – recruiting engineers as opposed to buying off-the-shelf ideas, and the extent to which they involve or not involve their customers in new product development – can be good indicators of change. In-depth case studies detailing how companies go about developing new products and services may provide support or the lack thereof for the observations presented in this paper. References Ashkenas, R., Ulrich, D., Jick, T. and Kerr, S. (1995), The Boundaryless Organization: Breaking The Chains of Organizational Structure, Jossey-Bass, San Francisco, CA. Balasubramanian, S. and Mahajan, V. (2001), “The economic leverage of the virtual community”, International Journal of Electronic Commerce, Vol. 5 No. 3, pp. 103-38. Chesbrough, H.W. (2003), “The era of open innovation”, MIT Sloan Management Review, Vol. 44 No. 3. Christensen, C.M. (1997), The Innovator’s Dilemma, Harvard Business School Press, Boston, MA. Hagel, J. and Armstrong, A. (1997), Net Gain: Expanding Markets Through Virtual Communities, Harvard Business School Press, Boston, MA. Henderson, R.M. and Clark, K.B. (1990), “Architectural innovation: the reconfiguration of existing product technologies and the failure of established firms”, Administrative Science Quarterly, Vol. 35 No. 1, pp. 9-22. Howells, J. (1997), “A socio-cognitive model of innovation”, Social Policy, Vol. 25, pp. 883-94. Iansiti, M. (1998), Technology Integration: Making Critical Choices in a Dynamic World, Harvard Business School, Boston, MA. Kozinets, R. (1999), “E-tribalized marketing? The strategic implications of virtual communities of consumption”, European Management Journal, Vol. 17 No. 3, pp. 252-64. Lawrence, P.R. and Lorsch, J.W. (1967), Organization and Environment: Managing Differentiation and Integration, Irwin, Homewood, IL. Lechner, U. and Hummel, J. (2002), “Business models and system architectures of virtual communities: from a sociological phenomenon to peer-to-peer architectures”, International Journal of Electronic Commerce, Vol. 6 No. 3, pp. 41-53. Lockett, A. and Thompson, S. (2001), “The resource-based view and economics”, Journal of Management. Lundkvist, A. and Yakhlef, A. (2004), “Customer involvement in new service development: a conversational approach”, Managing Service Quality, Vol. 14 Nos. 2/3.
Immobility of tacit knowledge
237
EJIM 8,2
238
Martin, R. and Harris, M. (2000), “Decentralization, integration, and the post-bureaucratic organization: the case of R&D”, Journal of Management Studies, Vol. 34 No. 4, pp. 563-85. Nambisan, S. (2002), “Designing virtual customer environments for new product development: towards a theory”, Academy of Management Review, Vol. 27 No. 3, pp. 392-412. Nonaka, I. (1994), “A dynamic theory of organizational knowledge creation”, Organization Science, Vol. 5 No. 1, pp. 14-37. Nonaka, I. and Takeuchi, H. (1995), The Knowledge Creating Company, Oxford University Press, New York, NY. O’Connor, G.C. (1998), “Market learning and radical innovation: a cross case comparison of eight radical innovation projects”, Product Innovation Management, Vol. 15, pp. 151-66. Osterloh, M.R. and Kuster, B. (2002), “Open source software production: climbing on the shoulders of giants”, Open Source Research, available at: http://opensource.mit.edu/ online_papers.php?&orderby ¼ authors Quinn, J.B. (2000), “Outsourcing innovation: the new engine of growth”, Sloan Management Review,. Rothaermel, F.T. and Sugiyama, S. (2001), “Virtual internet communities and commercial success: individual and community-level theory grounded in the atypical case of TimeZone.com”, Journal of Management, Vol. 27 No. 3, pp. 297-312. Rothwell, R. (1994), “Towards the fifth-generation innovation process”, International Marketing Review, Vol. 11 No. 1, pp. 7-31. Saviotti, P.P. (1998), “On the dynamics of appropriability of tacit and codified knowledge”, Research Policy, Vol. 26 Nos. 7/8, pp. 845-56. Sawhney, M. and Prandelli, E. (2000), “Communities of creation: managing distributed innovation in turbulent markets”, California Management Review, Vol. 42 No. 4, pp. 24-54. Sawhney, M., Prandelli, E. and Verona, G. (2003), “The power of innomediation”, MIT Sloan Management Review, pp. 77-82. Schilling, M. and Steensma, K. (1999), “Technological change, globalization, and the adoption of modular organizational forms”, working paper, Boston University, Boston, MA. Schubert, P. and Ginsburg, M. (2000), “Virtual communities of transaction: the role of personalisation in electronic commerce”, Electronic Markets, Vol. 10 No. 1, pp. 45-55. Shepherd, C. and Ahmed, P.K. (2000), “From product innovation to solution innovation: a new paradigm for competitive advantage”, European Journal of Innovation Management, Vol. 3 No. 2, pp. 100-6. Snow, C., Miles, R. and Coleman, H.J. (1992), “Managing 21st century network organizations”, Organizational Dynamics, Vol. 20 No. 3, pp. 5-20. Thomke, St and von Hippel, E. (2002), “Customers as innovators: a new way to create value”, Harvard Business Review. Ulwick, A.W. (2002), “Turn customer input into innovation”, Harvard Business Review, pp. 91-7. Van Wendel de Joode, R., de Bruijn, J.A. and Eeten, M.van (2002), “Protecting the virtual commons: self-organizing communities and innovative intellectual property rights regimes”, Open Source Research, available at: http://opensource.mit.edu/online_papers. php?&orderby ¼ authors Walther, J.B. (1992), “Interpersonal effects in computer-mediated interaction”, Communication Research, Vol. 19, pp. 52-90.
Whittington, R. (1990), “The changing structure of R&D: from centralisation to fragmentation”, in Loveridge, R. and Pitt, M. (Eds), The Strategic Management of Technological Change, Wiley, Chichester, pp. 183-204.
Immobility of tacit knowledge
Further reading Miles, R.E. and Snow, C.C. (1986), “Organizations: new concepts for new forms”, California Management Review, Vol. 28 No. 3, pp. 62-73. Von Hippel, E. (2001), “Innovation by user communities: learning from open software”, Sloan Management Review, Vol. 42 No. 4, pp. 82-6.
239
The Emerald Research Register for this journal is available at www.emeraldinsight.com/researchregister
The current issue and full text archive of this journal is available at www.emeraldinsight.com/1460-1060.htm
EJIM 8,2
DiffuNET: The impact of network structure on diffusion of innovation
240
Ben Shaw-Ching Liu College of Business Administration, Butler University, Indianapolis, Indiana, USA
Ravindranath Madhavan Katz Graduate School of Business, University of Pittsburgh, Pittsburgh, Pennsylvania, USA, and
D. Sudharshan University of Kentucky, Lexington, Kentucky, USA Abstract
European Journal of Innovation Management Vol. 8 No. 2, 2005 pp. 240-262 q Emerald Group Publishing Limited 1460-1060 DOI 10.1108/14601060510594701
Purpose – To provide an explicit model to address the relationships between the structural characteristics of a network and the diffusion of innovations through it. Further, based on the above relationships, this research tries to provide a way to infer diffusion curve parameters (innovation coefficient and imitation coefficient) from network structure (e.g. centralization). Design/methodology/approach – Based on the network and innovation literatures, we develop a model explicitly relating the structural properties of the network to its innovation and imitation potential, and in turn to the observed diffusion parameters (innovation and imitation coefficients). We first employ current theoretical and empirical results to develop postulates linking six key network properties to innovation and imitation outcomes, and then seek to model their effects in an integrative manner. We argue that the innovation and imitation potentials of a network may be increased by strategically re-designing the underlying network structure. We validated the model by searching the published empirical literature for available published data on network properties and innovation and imitation coefficients. Findings – We validated the model by searching the published empirical literature for available published data on network properties and innovation and imitation coefficients. The results reported from various relevant research papers support our model. Practical implications – This research shows that the innovation and imitation potentials of a network may be increased by strategically re-designing the underlying network structure; hence, provide guidelines for new product managers to enhance the performance of innovative products by re-design the underlying network structure. Originality/value – The model developed in this paper is a breaking through result of synthesizing various traditions of diffusion research, ranging from anthropology and economics to marketing which were developed independently. The research explicitly modeled the diffusion process in terms of the underlying network structure of the relevant population allowing managers and researchers to directly link the diffusion parameters to the structural properties of the network. By doing so, it added value by making it possible to infer diffusion potential from directly measurable network properties. Vis-a`-vis the network diffusion literature in particular, we added value by “unpacking” the diffusion
The authors acknowledge helpful comments and suggestions from Dawn Iacobucci, Randall Sandone, Thomas Valente and Marketing Proseminar participants at the University of Illinois, Urbana-Champaign.
process into innovation and imitation processes that form the building blocks of contagion. Moreover, we developed a holistic structural model of network diffusion which integrates the several network properties that have hitherto been studied separately. Keywords Product management, Modelling, Innovation, Forecasting, Marketing Paper type Research paper
Introduction As remarked by Rogers (2004a), there is an understandably natural interest by marketing researchers and professionals in the diffusion of product innovation because diffusion is essentially the marketing of new products. The interest by the marketing field in new product diffusion began around 1960 and peaked in the 1980s followed by gradual level off after then. Spurred by the current global context of rapid technological change and continuous innovation, researchers in several marketing-related fields have shown renewed interest in modeling the diffusion of innovations – e.g. Ganesh et al. (1997) and Mahajan et al. (1990) in new product marketing; Rogers (2003) and Valente and Rogers (1995) in communications research; Singhal and Rogers (2003) and Wolfeiler (1998) in health management; and Kelley and Brooks (1991) and Bretschneider and Bozeman (1986) in technology management. Such a trend is especially obvious when networking of the society is greatly influenced by the use of internet (Rogers, 2004a; Fenech and O’Cass, 2001). Within the various traditions of diffusion research, ranging from anthropology and economics to marketing, the main focus has been on tracing the spread of an innovation through a system in time and/or space (Rogers, 2003). It is acknowledged in the literature that diffusion is the process by which an innovation is communicated through certain channels over time among members of a social system (Rogers, 2004a), diffusion of innovation within a social group is fundamentally a process of social communication (Mahajan et al., 1990), the patterns of which are inextricably linked to the social structure of the group (Burt, 1987). However, the link between social structure and diffusion parameters remains largely unexplored, in the sense that most current diffusion models simply have an unexpanded “parameter” that acknowledges the impact of word-of-mouth communication (Iacobucci, 1996; Midgley et al., 1992). Against this background, we propose that explicitly modeling the diffusion process in terms of the underlying network structure of the relevant population will allow us to directly link the diffusion parameters to the structural properties of the network. Drawing on the rich diffusion literature in structural sociology and related disciplines (Rogers, 2004b; Burt, 1987; Rogers and Kincaid, 1981), we address the following research question: What are the relationships between the structural characteristics of a network and the diffusion of innovations through it? Further, given the above relationships, how can we infer diffusion curve parameters (innovation coefficient and imitation coefficient) from network structure (e.g. centralization)? There are three key areas in which we seek to contribute to the marketing literature on diffusion and networks. Vis-a-vis the diffusion literature in general, we seek to add value by making it possible to infer diffusion potential from directly measurable network properties. Vis-a-vis the network diffusion literature in particular, we first seek to add value by “unpacking” the diffusion process into innovation and imitation processes that form the building blocks of contagion. Next, we seek to develop a holistic structural model of network diffusion which integrates the several network
The impact of network structure 241
EJIM 8,2
242
properties that have hitherto been studied separately. Consistent with these three goals, we make the following arguments: . We posit that individual actors in a social network are each associated with an innovation potential and an imitation potential (i.e. each individual has a potential for innovative and imitative behaviors, which needs to be realized through marketing action), and that these constructs are fundamentally related to the network structure. We also argue that the innovation and imitation potentials of the individuals belonging to a network regulate the diffusion of an innovation through it. . We posit that each network has associated with it an innovation potential as well as an imitation potential. . Since the network is composed of individuals who are connected to one another, the network-level innovation and imitation potentials reflect an aggregation of individual potentials. . Drawing on existing network literature to motivate the postulates, we model the respective relationships between innovation and imitation potentials and the relevant network properties. . We derive the relationship between the innovation and imitation potentials of the network and the innovation and imitation coefficients of the standard diffusion model (Bass, 1969), respectively. . In essence, we conceive of the estimated innovation and imitation coefficients of a standard diffusion model as realized innovation and imitation potentials. The argument is that traditional marketing action (e.g. pricing, advertising, etc.) plays a critical role in realizing the innovation and imitation potential. Such traditional marketing action determines how much of the potential is actually “converted”; however, the potential itself may be increased by modifying the network structure. Thus, our formulation focuses attention on a different set of marketing objectives that are aimed at “re-engineering” the network in order to increase innovation and imitation potentials. The rest of the paper begins with a brief review of the relevant literature, and then goes on to develop the model as outlined above. (Figure 1 provides an overview of the conceptual model.) We conclude with some directional analyses of the model and a discussion of implications for research and practice (Figure 1). Network models of the diffusion of innovation Networks have always been implied, often without elaboration, in the diffusion literature: the diffusion of innovations through a social system has usually been studied as a process of communication flow between connected partners (Rogers, 2003; Iacobucci, 1996). Diffusion researchers employing the network perspective have sought to explicate the actual structure of relationships that shape and constrain the communication, thus throwing further light on the diffusion process. The core idea in the network tradition is that social structure influences the spread of new ideas and practices by shaping patterns of interaction within the network – e.g. who talks to whom (Burt, 1987). Since adopting an innovation is risky – in terms of deviating from conventional norms of behavior – actors (individuals, firms, or other units of analysis)
The impact of network structure 243
Figure 1. Conceptual framework linking network structure to diffusion effects
tend to “model” their behavior on that of other actors. The fundamental intuition of the network theory of diffusion is that structural patterns determine whom a given actor will choose as a “model”. While networks are composed of relationships between a set of actors, there are two broad approaches to the study of how relationships influence diffusion: relational and structural models of diffusion (Valente, 1995). Relational models consider the focal actor’s adoption or non-adoption in light of the behavior of those to whom the former is directly connected. Thus, for a given actor, direct contact with an influential “opinion leader” might be seen as impelling adoption. Structural models, in contrast, consider all relations in the network, rather than only the direct ties that a given actor may have. Founded on the key assumptions of structural sociology and network analysis (Wellman, 1988), structural network models acknowledge that the overall structure of the network, as well as a given actor’s position in it, influence that actor’s behavior and subsequent performance. For example, a given actor may adopt an innovation because of its prior adoption by a highly central and visible actor, even though the two actors may have no direct contact with each other. In modeling the effect of the overall network structure on diffusion, we adhere to the structural model. The history of network models of diffusion may be traced (Valente, 1995) from opinion leadership formulations (Coleman et al., 1966), to the strength of weak ties formulation (Granovetter, 1973), to the communication network formulation (Rogers and Kincaid, 1981) and finally to the structural equivalence formulation (Burt, 1987). Network analysts refer to the specific process of innovation diffusion as contagion; thus, the chief concern of network models of diffusion is the variety of network mechanisms through which contagion operates (Burt, 1987). In developing our
EJIM 8,2
244
postulates and mathematical model, we draw upon and expand the core ideas in this literature. The following key conclusions of the existing network research on the diffusion of innovations serve as background to our model development. These nine conclusions have been clustered into actor-level and network-level groups, depending on the relevant unit of analysis. Actor-level (with primary reference to the position of individual actor in the network). . Innovativeness is positively associated with the actor’s prominence in the network (a crude measure of which is the number of an actor’s contacts), which may be viewed as indicative of opinion leadership (Rogers, 2003) or, in a related manner, as a measure of how well integrated the actor is (Coleman et al., 1966). . Highly central players are more likely to be early adopters of advantageous innovations, while peripheral players are more likely to adopt riskier innovations (Rogers, 2003; Becker, 1970; Burkhardt and Brass, 1990; Madhavan et al., 1998). Potential adopters who are highly central tend to have higher reputations that they are less willing to risk by adopting unproven or contra-normative innovations; peripheral players have less at stake and may be more willing to take such risks (Rogers, 2003; Abrahamson and Rosenkopf, 1997). . Isolates, i.e. actors who are not connected to anybody else, tend to show considerably later adoption times (Rogers and Kincaid, 1981). . Weak ties, i.e. actors that serve as bridges between unconnected groups, are important links in the diffusion process (Granovetter, 1973; Burt, 1992). . Innovativeness is positively associated with structural centrality, i.e. how significant a position the actor has in the network. For example, betweenness centrality measures the degree to which an actor lies between other actors (corresponding to potential control), while closeness centrality measures the degree to which an actor is close to others (corresponding to potential access). Actors who are highly central in these respects are more likely to receive innovation-related information and influence early, and hence more likely to adopt early (Burkhardt and Brass, 1990). Network-level (with primary reference to overall patterns of relationships). . Highly centralized networks (with a small number of highly central actors) should demonstrate a higher rate of diffusion; once adopted by the central actors, the innovation will spread rapidly through the network (Valente, 1995). . Diffusion will be more rapid in networks that are densely interconnected (Black, 1966). . Contagion operates through cohesive ties, i.e. through strong connections with close contacts (Coleman et al., 1966). . An alternative hypothesis to contagion through cohesion is that it operates through structural equivalence, i.e. actors may take their cues from others that they consider to be similar to themselves, even in the absence of direct ties between them (Burt, 1987).
Against the background provided by the current network literature on the diffusion of innovation, we develop postulates and a mathematical model relating structural properties and the innovation and imitation coefficients. The relationship between diffusion parameters and network structure variables Consistent with the diffusion literature, we model the diffusion process in terms of the innovation and imitation potential of the individual and the network. The structural properties proposed to influence innovation potential are centrality, constraint, and range. The structural properties proposed to influence imitation potential are centralization, density, and embeddedness. Based on the extensive theoretical and empirical support available in the research literature, we take as axiomatic the individual effects of these network properties on innovation and imitation potential[1]. In other words, our goal in this paper is not to develop further theoretical or empirical arguments in support of each causal link, but rather to develop a parsimonious mathematical model that integrates their effects holistically. Network structure and innovation potential Centrality is a key property of the individual actor within a network, and is a structural measure of the importance of a given player in its network (Freeman, 1979). In general, an actor is highly central in its network if it has a large number of connections with other actors, or if it occupies a position of strategic significance in the overall structure of the network (Scott, 1991). Centrality derives from being the object of relations from other actors, implying that the central actor is “in demand” as a relationship partner (Burt, 1991). Drawing on the logic of resource dependency (Pfeffer and Salancik, 1978), i.e. that organizational interaction arises because organizations seek access to critical resources, it may be argued that centrality indicates the extent of potential resources available to an actor. Thus, an actor who is “in demand” as a partner has access to a large stock of resources through its various contacts. Extensive theoretical and empirical support is available for the argument that the highly central actor is in a good position to innovate. There are three causal mechanisms underlying this argument. First, there is a resource-based argument: if centrality is taken as a proxy for the quantity of critical resources available to an actor (Galaskiewicz, 1979), it may be argued that highly central actors are more likely to have “slack” resources which foster experimentation (Nohria and Gulati, 1996) and facilitates innovation (Rogers, 2003). Second, there is an information-based argument: Innovation is more likely to take place in a rich and complex information environment, as individuals and firms are exposed to a wide variety of cues that stimulate innovation – e.g. lead users (von Hippel, 1988) or sophisticated suppliers (Porter, 1990). A highly central actor is at the confluence of a large number of information sources (each contact can be viewed as one), and is thus well positioned to innovate. Further, the highly central actor is more likely to receive innovation-related information and influence earlier than less central actors in the same network (Rogers, 2003). Third, there is a status-based argument: the highly central player is unlikely to imitate widespread practices that are already in use by the “followers”. Rather, the former will either innovate or imitate other highly central peers. (Imitating a small number of high-visibility elites should be tantamount to innovation.) Especially where the
The impact of network structure 245
EJIM 8,2
246
innovation has social prestige attached to its adoption, late adoption may reduce the social value of the innovation (Rogers, 2003). This might work in the opposite direction as well, in the sense that an imitator may not be sought out by others as much as an innovator would be. This argument is consistent with the postulate that early adopters must continue to make judicious innovation decisions in order to maintain a central position in the communication structure (Rogers, 2003). Extensive empirical support is available for the argument that centrality is positively related to innovation potential. Rogers and Kincaid (1981, p. 228) report on several early studies which show that connectedness – a concept “very similar, if not identical” to centrality (Rogers and Kincaid, 1981, p. 178n) – is positively related to innovativeness. Valente’s (1995, p. 54) re-analysis of three separate data sets showed that structural centrality is associated with innovativeness. Ibarra (1993, p. 492) found that network centrality is a strong determinant of individual involvement in administrative innovation. Based on evidence illustrated by the above-mentioned studies, and combining the resource-based, information-based and status-based arguments, thus, the first postulate is that P1.
The network centrality of an individual node will be positively related to its innovation potential.
Constraint, drawn from Burt (1983b) recent work on structural holes, is another key structural property of the individual actor. The constraint image may be summarized as follows (Krackhardt, 1995): Assume that A has a relationship with B and C. A is in a better position to profit from the relationships if B and C are not connected to each other. When B and C are connected only through A, a structural hole exists between them, which can be exploited by A. A’s advantage is built upon three factors. First, A obtains information separately and with minimum redundancy from both B and C. Second, A has the opportunity to control B and C by “playing them off against each other”. Finally, A can simply arbitrage resources between B and C – e.g. buying from B and selling to C at a premium. All of this is possible only if A has exclusive relations with B and C, and the latter have no substitute for A – i.e. there is a structural hole between B and C. On the other hand, if B and C are connected to each other in some other way as well – either directly or through another actor – A’s advantage begins to disappear. The absence of a structural hole between B and C poses a constraint on A. It has been empirically demonstrated that constraint is negatively related to performance in a variety of contexts, such as industry returns and managerial career progress (Burt, 1992). Building on the above, we argue that constraint will have a negative influence on an actor’s propensity to innovate. If an actor has a network rich in structural holes, its contacts are unconnected with each other. This makes the network both efficient and effective in information terms, as it ensures that redundancy in information sources is eliminated (Burt, 1992). Thus, for the same level of network activity, the actor whose network is rich in structural holes will gain more varied information. In contrast, the actor whose network is poor in structural holes is at a disadvantage, since its partners – being connected to each other – will “recycle” redundant information to it. Since innovation and new knowledge arises at the interfaces of existing knowledge domains (Simon, 1985; Granovetter, 1973) and are accentuated in a rich and complex information environment, it may be argued that a network rich in structural holes will
spur innovation. In contrast, the absence of structural holes, termed constraint, should be a negative influence on the actor’s propensity to innovate. From an empirical standpoint, the constraint argument – although a recent theoretical development – has garnered extensive support. Burt (1998) has demonstrated constraint effects across five study populations. The beneficial effects of structural holes have been found to influence outcomes ranging from industry profits (Burt, 1992), career progress (Podolny and Baron, 1997), annual bonus (Burt, 1997), and finding better jobs (Granovetter, 1974). Ahuja (1998, p. 27) has specifically demonstrated that networks rich in structural holes are associated with higher innovative output. Thus, the second postulate is that P2.
The network constraint of an individual node will be negatively related to its innovation potential.
Network range is the third structural property of the individual actor that is proposed to influence innovation. Network range is defined as the extent to which an actor’s ties link it with diverse others (Burt, 1983a). For example, if an individual’s friendship network is limited to others that belong only to one ethnic or social group, that individual has low network range. On the other hand, a friendship network comprising others of a wide variety of ethnic and social groups has high network range. Network range implies that the actor is connected to partners that are dissimilar from itself and each other. The actor with a higher network range will therefore have access to more diverse resources. In addition, this will mean that the actor has an efficient-effective network, in that the resources it has access to are not duplicated or redundant (Burt, 1992). Rogers and Kincaid (1981, p. 244) develop Epstein’s (1961) finding that networks characterized by greater heterophily and diversity are informationally rich. Burt (1983b, p. 169) found that large firms, with the highest network range, also tended to have multiplex directorate ties and access to the influence of diverse economic sectors on their boards. Madhavan’ (1996) data from the steel industry showed that range was positively associated with flexibility. Building on these empirical findings and on the insights of the psychological literature on creativity referred to above, it may then be argued that the higher the network range, the higher ought to be the innovation potential. Thus, the third postulate is that P3.
The network range of an individual node will be positively related to its innovation potential.
Network structure and imitation potential Imitation is a social phenomenon that takes place within the context of a social network to which both imitators and the “imitatee” belong. Thus, consistent with the basic argument of this research, it may be proposed that network structure influences imitation patterns. Network density is the first network property of interest, and refers to the proportion of links present relative to those possible (Marsden, 1990). A dense network is characterized by a large number of links being present among the actors. For example, a network in which actors have direct relations with most other actors is a high-density network. In contrast, if actors have a limited number of direct relations, we have a low-density network.
The impact of network structure 247
EJIM 8,2
248
Our model suggests that network density will be positively related to the imitation potential. There are three main arguments for the proposed effect of network density on imitation. First, there is a communication argument: high network density indicates high levels of communication in the network, increasing the likelihood that actors will be exposed to news and influence about the innovation sooner rather than later. Second, there is the information argument: since densely connected actors are likely to have access to the same information (Granovetter, 1973), the scope for “information variation” and subsequent innovation will be limited. Thus, imitation, rather than innovation, will become the dominant mode. Third, there is the socialization argument: high density networks function as “cliques” creating strong behavioral pressures to conform – leading to imitation – rather than to adopt new practices – which would lead to innovation (Kraatz, 1998). Research from several different fields supports the above reasoning. In the field of epidemiology, studies have shown that, for diseases with same infectiousness, high density networks are more likely to experience epidemics than lower density networks (Bailey, 1975). Similarly, Krassa (1988) argued that dominant opinions become more widespread quickly in more integrated communities. Valente’s (1995, p. 42) re-analysis of three classic data sets showed that network density is indeed associated with faster diffusion. Based on computer simulations, Abrahamson and Rosenkopf (1997, p. 298) argued that “the greater the network density, the greater the number of adopters”. Similar conclusions may be inferred from Morris (1981) and Mizruchi (1992), Strang and Soule (1998, p. 273). Combining the communication, information and socialization arguments, then, it may be proposed that P4.
Network density will be positively related to the imitation potential of the network.
Centralization is the next network property proposed to influence imitation, and refers to the variability in centrality scores among actors (Marsden, 1990). A network with a few highly central actors and many actors with low centrality is a highly centralized network; a less centralized network will have a more equitable distribution of centrality scores. Reiterating the resource-based, information-based, and status-based logic behind Postulate 1, highly central actors should be innovators in general, and less central actors should be imitators. If this is the case, a highly centralized network – with relatively few highly central actors – should demonstrate a high imitation potential. Moreover, once the innovation is adopted by the central actors in a centralized network, it diffuses rapidly to the rest of the less central actors, facilitating imitation on their part (Valente, 1995). This postulate builds on the empirical support already reported for centrality. Moreover, Valente’s (1995, p. 54) re-analysis showed that advantageous innovations diffuse more rapidly in highly centralized networks. Thus: P5.
Centralization will be positively related to the imitation potential of the network.
Embeddedness is the final structural property of the individual actor proposed to influence imitation. According to Granovetter (1985), embeddedness refers to the fact that economic action and outcomes are affected by actors’ pairwise relations (relational embeddedness) and by the structure of the overall network of relations
(structural embeddedness). An actor’s level of embeddedness refers to the extent to which its behavior is affected by its relationships with its partner(s). Generally speaking, the stronger the relationship with a partner, the higher is the actor’s commitment to the relationship, and the more likely is the relationship to be a factor in its decisions. Based on finely-detailed ethnographic and quantitative data, Uzzi (1997) has argued that embeddedness increases each party’s commitment to exceed the letter of the contract, and to contribute to the relationship. For example, an individual may be willing to pay a higher “price” – emotionally or otherwise – to sustain a strong friendship than she would to sustain a casual acquaintance. By the same token, embeddedness also facilitated fine-grained information transfer and complex adaptation to environmental changes (Uzzi, 1997). Uzzi’s (1997) data suggest that the width of information search decreases with the number of embedded ties, while the depth of information search increases with the strength of the embedded ties – a claim that has significant implications for the tendency to imitate. The link between embeddedness and imitation also stems from the fact that strong relationships will be associated with strong behavioral pressures to conform – because of the desire to keep the relationship going by living up to expectations, as well as to avoid jeopardizing it through non-conforming behavior. Embeddedness, as a measure of the actors’ commitment to the network, may thus influence how high the behavioral pressures to conform are. As argued earlier, behavioral pressures to conform should lead to imitation rather than to innovation. Thus, the sixth postulate is that P6.
Embeddedness will be positively related to the imitation potential of the network.
As indicated earlier, the above six postulates are also graphically presented in Figure 1. Model formulation Based on the postulates described above, we can now formulate the relationship between the six network structure variables and the two network potential parameters. The two diffusion curve coefficients are then, respectively, derived in terms of the network innovation and imitation potential parameters, INP and IMP. Individual and network parameters As indicated earlier, innovation potential and imitation potential may be conceptualized at the level of the individual as well as of the network. While we acknowledge that innovative and imitative actions originate at the individual level, innovation potential and imitation potential at the network level are the key inputs to marketing strategy formulation – e.g. the overall strategy may be different for networks with high innovation potential than for networks with low innovation potential. Thus, the following model employs individual-level innovation and imitation potentials as the basis on which to estimate network-level innovation and imitation potentials (Iacobucci and Hopkins, 1992). Equations (1)-(5b) represent the analytical process by which we (1) aggregate individual-level innovation and imitation potentials into network-level potentials, and then (2) relate the network-level potentials to the innovation and imitation coefficients familiar in the innovation literature.
The impact of network structure 249
EJIM 8,2
Based on postulates 1 to 3, the innovation potential of an individual i (INPi) may be expressed as: INPi ¼ bp0 þ bp1 x1i þ bp2 x2i þ bp3 x3i þ 1i
250
ð1Þ
where x1i is centrality of individual i in the network, x2i is constraint of individual i in the network, x3i is range of individual i in the network, bpk is regression coefficients to be estimated, 1i is random error. Defining the innovation potential of a network as the innovation potential of an individual chosen at random, the innovation potential of the network (INP) will be: INP ¼ EðINPi Þ where EðINP i Þ ¼ bp0 þ bp1 x1 þ bp2 x2 þ bp3 x3
ð2Þ
where INP is the innovation potential of the network, E(INPi) is the expected value of the innovation potential of any individual i, x1 ¼ Eðx1i Þ is the expected value of the centrality of individuals in the network, x2 ¼ Eðx2i Þ is the expected value of the constraint of individuals in the network, x3 ¼ Eðx3i Þ is the expected value of the range of individuals in the network. While individuals are associated with a potential to innovate due to their structural position, whether or not this potential is actually realized depends on the marketing effort (product, price, promotion, place) applied. The realized innovativeness is indicated in the diffusion of innovation literature by the time at which an individual adopts. We term the realized innovativeness of individuals as their innovation propensities. (In this view, innovators will have higher innovation propensities than those who belong to the early majority. Early majority members will have higher innovation propensities than late majority members, and laggards will have the lowest innovation propensities.) Thus, the time at which individuals adopt an innovation is dependent on their respective innovation propensities. Innovation propensities, by definition, can at most be the respective individual innovation potential. The difference between innovation propensity and innovation potential both at the individual and network levels is accounted for by the marketing effect. Thus, marketing effort combined with the individual network characteristics of centrality, constraint, and range characteristics leads to the realization of a specific innovation propensity by an individual member of a network. The bounded nature of this relationship is consistent with common empirical observation in the marketing literature (Lilien et al., 1992, p. 475). Formally, in general, Pi is modeled as P i ¼ fðINPi ; MKTÞ
ð3aÞ
where Pi is individual i’s innovation propensity, INPi is individual i’s innovation potential (as previously defined), MKT is a summary measure of the marketing effort used. Following Lilien et al. (1992), we specify the more specific, yet generally applicable model between Pi and MKT as P i ¼ ai ð1 2 e2b i MKT Þ þ ci
ð3bÞ
with ai þ ci ¼ INPi as a constraint
ð3cÞ
Recall that INPi is the maximum value that Pi can attain at maximal marketing effort. Earlier, we defined the innovation potential of the network as the innovation potential of an individual chosen at random. We also assumed that under normal conditions, the innovation potential of a network could be estimated as the expected value of the innovation potentials of its members. Therefore, the innovation coefficient of a network may be written as p ¼ EðP i Þ ¼ að1 2 e2bMKT Þ þ c
ð3dÞ
where b ¼ Eðbi Þ
and a þ c ¼ Eðai þ ci Þ ¼ EðINPi Þ ¼ INP
ð3eÞ
From equation (3d), b may be interpreted as the innovation response elasticity of the network to marketing effort. Similarly, the relationship between the imitation potential of the network (IMP) and the other three network structural variables (density, centralization, and embeddedness) can be modeled based on postulates 4, 5, and 6. Since these three network structural variables are aggregate network properties, IMP can only be measured/identified at the aggregate level; therefore, the imitation potential of the network is modeled directly at the network level as follows: IMP ¼ bq0 þ bq4 x4 þ bq5 x5 þ bq6 x6 þ s
ð4Þ
where IMP is imitation potential of the network, x4 is the density of the network, x5 is the centralization of the network, x6 is the embeddedness of the network, bqk is the regression coefficients to be estimated, s is random error. Again, at the network level, the realized imitativeness depends on the marketing effort deployed and is constrained by the innate imitation potential of the network. As will be recalled, the imitation potential of a network depends on its density, centralization, and embeddedness. Symmetric to the way innovation was modeled, the relationship between imitation propensity (q) and imitation potential (IMP) is modeled as q ¼ sð1 2 e2nMKT Þ þ u
ð5aÞ
s þ u ¼ IMP
ð5bÞ
where
That is, the imitation coefficient of a network varies exponentially with marketing effort (MKT) and has a maximum value of the innovation potential (INP) of the network. The coefficient n may be interpreted as the imitation response elasticity of the network to marketing effort. Recall that the relationships between these two network potential variables (INP, IMP) and the six network structure variables ðx1 ; x2 ; x3 ; x4 ; x5 ; x6 Þ are formulated in equations (2) and (3).
The impact of network structure 251
EJIM 8,2
252
Model analysis The relationships derived above are useful for both predictive and normative purposes. First, the diffusion curve for an innovation can be predicted given knowledge of the network structure variables. Given a set of current network structure variables, x1o, x2o, x3o, x4o, x5o, x6o, the innovation potential and imitation potential of the current network, INPo and IMPo, can be calculated via equations (2) and (4). In turn, the innovation coefficient and imitation coefficient, po and qo, can be calculated via equations (3d), (3e), (5a) and (5b) for any level of marketing effort. On the other hand, if a marketer wants to achieve a certain pattern of diffusion for an innovative product, then the following analysis may be helpful. Two cases are detailed below. Case A deals with situations when the analyst sets targets for the diffusion coefficients directly. Case B deals with situations when the analyst sets targets directly for peak sales and the time to peak sales. The latter is more likely to be found in managerial practice. Case A Let the target diffusion curve parameters chosen be pt and qt, respectively. Then the corresponding values of the target network potential variables, INPt and IMPt, can be calculated using equation sets- (3) and (5), respectively for a given planned marketing effort. Using equations (2) and (4), there are infinite sets (x1, x2. . .x6) that lead to INPt and IMPt because the system of equations is under-identified. The system can be made to yield a unique solution by imposing additional constraints on it. Since changing a network structure would involve additional costs, cost minimization imposes additional constraints on the system. Let wpk and wqk be the cost of a unit change in the respective network structure variables (xk), then the problem can now be defined as finding the values of the network structure variable that will lead to the target potential variable values with minimum cost of changing the network structure variables. Let Xp be the column vector 2
x1
3
6 7 6 x2 7 4 5 x3 and Xq be the column vector 2
x4
3
6 7 6 x5 7: 4 5 x6 Therefore, the problem can be defined as one of finding Xpt and Xqt such that the distances between Xpt and Xpo, and between Xqt and Xqo are, respectively, minimized. The solutions are: X ptk ¼ X pok þ ½ðINPt 2 INPo Þ=bp ·bp bpk =w2pk
ð6Þ
X qtk ¼ X qok þ ½ðIMPt 2 IMPo Þ=bq ·bq bqk =w2qk
ð7Þ
where “·” represents the dot product operation of vectors; “o” represents original/current status; “t” represents target status; “k” ¼ 1; 2; 3; 4; 5; 6: Case B If target peak sales quantity (Qt*) and target peak sales timing (Tt*) are known and the target innovation coefficient ( pt) and target imitation coefficient (qt) are not known, then pt and qt can be obtained by solving the following set of equations (see Lilien et al., 1992, p. 471): Tt* ¼
1 qt ln pt þ qt pt
ð8Þ
ðpt þ qt Þ2 4qt
ð9Þ
Qt * ¼ Qm
where Qm is the estimated market potential. Then, the values of Xp and Xq can be arrived at as in Case A. Discussion Our goal in this paper has been to develop an analytical framework characterizing the diffusion of innovation through a network as a function of its structural properties. Drawing upon the rich literature on the diffusion of innovation and on networks (Rogers, 2003; Burt, 1987, 1992), we developed a model of the diffusion curve parameters as functions of the six structural properties, centrality, constraint, range, density, centralization, and embeddedness. In this final section, we discuss issues related to the validity of our approach, and sketch its contributions and implications. Validity issues Given that our goal in this paper was to develop a mathematical model of the relationship between network structure and innovation and imitation coefficients, a full-fledged empirical demonstration of the model is a task we leave to future research. However, in the hope of providing a test of “face validity”, we provide below a proxy-based argument that builds on the well-accepted dimensions of culture (Hofstede, 1980). The ideal way to establish the validity of the model would be to have data on network properties and on innovation and imitation coefficients for different networks, so that the proposed relationship between network properties and the innovation and imitation coefficients may be tested. In the absence of such data, however, our goal here is only to show that the model is conceptually sound, and that further data collection and testing are worthwhile – the actual collection of primary data and its testing are beyond the scope of this paper. We began the process by searching the published empirical literature for available published data on network properties and innovation and imitation coefficients. Given the amount of research in marketing on innovation diffusion, it was not difficult to find studies that have compared p and q across different societies. Two of the best examples of such studies were Takada and Jain (1991) and Helsen et al. (1993) – e.g. the latter reported p and q for Color TV sets across Europe, Japan, and the US. However, we were
The impact of network structure 253
EJIM 8,2
254
unable to locate any usable study that compared all the network properties of interest across different national societies. While there were a number of studies that examine different network properties in different networks – e.g. the density of interpersonal networks in Japan or the US – we could find very few studies that compare the network properties of interest across the countries for which we had p and q data. The only usable study was the one by Money et al. (1998), which compared word-of-mouth networks in Japan and the United States. According to data reported by Money et al. (1998), Japanese firms in Japan demonstrated an average tie strength of 7.1, while American firms in the United States demonstrated an average tie strength of 4.5. Viewing the strength of ties as an indicator of embeddedness, we interpret this to mean that Japanese firms are more embedded in their networks than American firms – an assertion that is at least tangentially supported by the extensive literature on Japanese business networks (Gerlach, 1992). Since this was the only suitable data available, we decided to focus the validation attempt on the embeddedness postulate. To strengthen our argument, we decided to identify a suitable proxy that could be as an additional indicator of embeddedness. The best candidate for such a proxy, both conceptually and empirically, appeared to be Hofstede’s (1980) dimensions of cultural values. Based on data from over 116,000 employees of a single company – thus controlling for company culture – Hofstede’s four dimensions of cultural values are generally accepted as explaining differences among national cultures. As network structures are inextricably linked to the culture of the society, we propose that Hofstede’s dimensions of cultural values can be used as a proxy for network properties for the purposes of this analysis. In his research, Hofstede (1980) proposed four dimensions of cultural values: individualism/collectivism, power distance, uncertainty avoidance, and masculinity/femininity. However, we focus on two of these dimensions as being directly relevant to network properties: individualism/collectivism, and uncertainty avoidance. In individualistic countries, the dominant concern of most people is for themselves and their families, rather than others. The individual and his/her rights are highly valued. Collectivist cultures, on the other hand, value the overall good of the group very highly. It is expected that individual interests will be subordinated to the needs of the group. In collectivist countries, people look after each other in exchange for loyalty, emphasize belonging, and often make group decisions (Francesco and Gold, 1998). We argue that collectivist nations will have high-density and high-embeddedness networks, and therefore high q. For example, Chinese society is well known for emphasizing interpersonal relationships as a guiding structure for economic and social organization (Bian, 1997). In contrast, networks in more individualistic nations will be characterized by less dense networks and less social embeddedness. Uncertainty avoidance is related to the preferred amount of structure. Countries with strong uncertainty avoidance will be characterized by greater structure, and explicit rules of behavior. There is usually a greater concern for doing things right, greater risk-aversion, and greater stability in employment relations. Weak uncertainty avoidance, in contrast, is associated with a preference for unstructured situations, greater flexibility of behavioral norms, and a higher incidence of entrepreneurship.
Strong uncertainty avoidance – greater dependence on behavioral modeling – stronger networks – higher density and embeddedness – high q. Juxtaposing our postulates with the data from Money et al. (1998), as well as Hofstede’s (1980) data on cultural dimensions, we would expect to find the following pattern: US: Low tie strength (mean: 4.5); High individualism (score: 91); Low uncertainty avoidance (score: 46)! Less dense and less embedded networks ! Low q Japan: High tie strength (mean: 7.1); Low individualism (score: 46), High uncertainty avoidance (score: 92) ! Denser and more embedded networks ! High q Indeed, comparative data from Takada and Jain (1991) and Helsen et al. (1993) appear to support this expectation. According to Takada and Jain (1991), the US q was lower than the Japanese q for six out of seven products. Although Helsen et al. (1993) did not have country-level data, they reported imitation coefficients for two different “segments” with the US and Japan in separate segments. q for the segment containing the US was lower than that of the segment with Japan for two out of two products reported. The above comparison suggests that embeddedness (and perhaps density) may have a demonstrable impact on imitation coefficient. Given the absence of readily available data, our proxy-based approach addresses only part of our model. Despite these limitations, however, we believe that this analysis serves to strengthen the face validity of our argument. Contributions and implications The network approach to modeling diffusion espoused here seeks to make several contributions to the theoretical literature on diffusion. As pointed out in our introduction, previous treatments, while acknowledging the role of network communication in diffusion, have dealt with the issue largely by means of an unexpanded parameter (Iacobucci, 1996) that throws no further light on the process of diffusion through networks. By directly modeling the link between network structural properties and diffusion curve parameters, we explicitly incorporate the network process and “unpack” the parameters to demonstrate the effects of network structure on the diffusion of innovation. Further, by investigating the diffusion effects of relatively recent network constructs such as constraint and embeddedness, we add value to current network models of diffusion, represented by the work of authors such as Rogers and Kincaid (1981) and Valente (1995). Finally, this paper bridges the hitherto largely isolated traditions of diffusion networks (Coleman et al., 1966; Rogers and Kincaid, 1981) and the mathematical modeling of diffusion (Bass, 1969). While researchers in these two traditions have acknowledged each other (Iacobucci, 1996), we believe that our paper is an early attempt to directly and comprehensively model the diffusion of innovation in terms of network structure. Research directions. The current conceptual and analytical investigation also points the way toward a promising line of future research. These research implications may be classified into three areas: empirical demonstration, assessment of explanatory potential, and theoretical elaboration. The first important task would be to subject the postulates outlined here and embodied in the model to rigorous empirical testing. Part of such an empirical testing program may be to compare and contrast the predictions and effectiveness of the network model as against models of diffusion. In this context,
The impact of network structure 255
EJIM 8,2
256
it must be noted that the network properties themselves are well-founded in the empirical literature on networks – e.g. centrality (Burkhardt and Brass, 1990), constraint (Burt, 1992; Krackhardt, 1995), range (Burt, 1983a), density (Marsden, 1990), centralization (Madhavan et al. (1998)), and embeddedness (Uzzi, 1997) – and do not pose significant measurement challenges. Second, the explanatory potential of our approach may be assessed, in part, by its utility in explaining diffusion effects in multiple settings. As an example, our approach leads to the postulate that possible differences in diffusion curves among nations – as have been demonstrated by Takada and Jain (1991) – may be explained by differences in social network structure. For instance, it may be proposed that societies such as that of China, known for (1) its emphasis on interpersonal relationships as a guiding structure in economic and social organization (Bian, 1997); and (2) dense family and friendship relations, will display higher network densities and hence higher imitation potential. Empirically validating the resultant set of postulates may pose a fruitful line of future inquiry. Finally, future empirical and theoretical work could also be helpful in refining and further elaborating the model. For example, empirical investigation of the proposed effects in a variety of network and product settings (e.g. consumer products in a network of individuals and industrial products in a network of organizations) could potentially lead to a contingency model of diffusion in networks. Another key aspect of such elaboration would be to specify how the intrinsic qualities of the innovation interact with network properties in determining the diffusion pattern. Yet another promising avenue of research is to investigate the dynamics of network evolution and how they effect diffusion. For example, while structure clearly influences diffusion, the diffusion process in turn may influence structure, thus leading to a two-way causal relationship between structure and action as modeled in Giddens’ (1984) structuration theory. Implications for practice. Significant managerial benefits stem from the model as well. Most importantly, it would allow the marketing manager to potentially use the network properties as “levers” so as to influence the diffusion process at a fundamental level. By implication (i.e. since they do not explicitly incorporate network structure), current diffusion models appear to assume network structure as given. Such a treatment is consistent with our formulation, in which marketing action moderates the relationship between the innovation and imitation potentials (INP and IMP) and the realized innovation and imitation coefficients ( p and q). However, our model adds the insight that traditional marketing action, while determining the efficiency with which the potential is realized, cannot fundamentally increase the innovation and imitation potential – p and q will always be bounded by INP and IMP, respectively. The only way to increase INP and IMP is to change the network structure itself. Consider an example: Ciba-Geigy’s Agricultural Division had invested $12.5 million in the development of a new herbicide, Dual (HBS case #9-582-026, 1982). Faced with the problem of accelerating the product’s rate of adoption, Ciba-Geigy used a communications technology offered by TeleSession of New York City. TeleSession was a marketing service aimed at accelerating the adoption of new products by
bringing together opinion leaders, users, and potential users via teleconferences. What TeleSession effectively did was to help Ciba-Geigy reengineer the structure of the network of potential users so as to speed up the diffusion of Dual. A model explicitly relating network structure properties to diffusion curve parameters, as we propose, would enable managers to determine what modifications to the structure would be most beneficial. Thus, the key managerial implication is that the network structure should be viewed as being potentially under managerial control. For example, a marketing manager may seek to increase the centrality of her clients by assisting them in forming strategic relationships within the network. Thus, managerial action aimed at changing the underlying network structure becomes a significant new tool to facilitate the diffusion process, as in the Ciba-Geigy example[2]. In this context, our model helps in two specific ways. First, it clearly identifies the network properties that influence the innovation and imitation coefficients, so that managerial action may be appropriately targeted. Second, our model points the way to estimating the cost and return associated with changing network properties. Thus, the marketing manager may take an informed decision as to where scarce resources can be most effectively employed. In a competitive context where many marketers may be trying to re-engineer customer networks, such an ability to fine-tune network re-engineering may be a significant strategic capability. A further suggestion stemming from the model is a conception of “product-specific” augmented networks that may be different from the basic communication network (Midgley et al., 1992). Taking the basic communication network as a given, a manager may consider how to augment it with product-specific communication links so as to influence the diffusion of the innovation. In such a case, the diffusion model parameters would be determined not by the structural properties of the basic communication network, but by those of the augmented network. By highlighting the differential impact of the properties of basic and augmented networks, our model can help provide a more sophisticated understanding of the impact of network structure and ultimately of how the diffusion process works in a given market. Thus, the current model can be a potentially valuable addition to the market research toolkit. Finally, our approach provides a basis for estimating the diffusion curve parameters in situations dealing with radically new products where prior experience with marketing mix variables may not be widely available. Thus, with radical innovations, knowledge of the underlying network structure may provide a better guide to diffusion than extrapolating from experience with more familiar, although less similar, products. A key point here is that traditional models of diffusion estimate p and q post hoc, based on experience with product performance (Mahajan et al., 1990). In contrast, the network model makes it possible to prospectively estimate the diffusion curve parameters, based on the structural properties of the network, which may be measured even before the product is launched. There may also be issues of cost-benefit tradeoffs that are relevant. Network changes are not without cost, and may call for financial investment and human adjustment. Thus, the desired changes in network structure have to be evaluated in terms of the cost of bringing them about, as our model does. In sum, the model presented here could form the basis for potentially very valuable managerial tools with which to understand and influence the diffusion of new products and technologies.
The impact of network structure 257
EJIM 8,2
It should also be pointed out that our model brings out some ethical challenges to the manager who may be interested in re-engineering his customer network. If human relationships are the basis for enduring social networks, especially in a consumer market, how appropriate it is to talk in terms of “engineering?” This is an issue that deserves to be considered in depth, although any meaningful discussion of the topic is beyond the scope of this paper.
258 Conclusion The contributions and implications outlined above serve to demonstrate the potential value of explicitly modeling diffusion curve parameters in terms of underlying network structure. This paper sought to add value to the diffusion and network literature in marketing in three ways: (1) by making it possible to infer diffusion potential from directly measurable network properties; (2) by “unpacking” the diffusion process into innovation and imitation processes that form the building blocks of contagion; and (3) by developing a holistic structural model of network diffusion which integrates the several network properties that have hitherto been studied separately. Apart from providing an alternative approach to the estimation of diffusion curve parameters, our method offers greater managerial leverage by identifying concrete ways in which the network may be “re-engineered” in order to facilitate diffusion. The fact that diffusion curve parameters may be prospectively estimated from network structure also lends a distinct advantage over traditional models in which p and q are estimated post hoc. In the current business and societal context of rapid technological change and continuous innovation, research interest in diffusion processes remains strong (Rogers, 2003). Especially in the high-technology sectors of the economy, such as the internet and telecommunications, both networks and the diffusion of innovations are central issues. Against this background, the model outlined here is intended to pave the way towards a rigorous integrated treatment of networks and the mathematical modeling of diffusion. Notes 1. This is the reason why we have crafted our arguments in terms of postulates (which, following Webster’s Dictionary, we take to mean established rules or principles which form the premise for a train of reasoning), rather than in the traditional form of hypotheses. Our position is that both conceptual and empirical support is already available for each of the postulates taken separately; thus, our concern is neither to present new theoretically derived hypotheses nor to lay the groundwork for empirical testing. Rather, we take the postulates as already enjoying some measure of empirical support, and hence justifying their use as premises in our task of developing an integrative analytical model. Thus, our purpose in briefly re-stating the arguments leading up to the postulates is only to summarize the arguments that already exist with some empirical support. This is not to say that all empirical issues have been settled with respect to the postulated relationships, simply that our goal in this paper is to advance the analytical modeling effort rather than the agenda of empirically testing theoretically derived hypotheses. 2. In a general sense, our equations are relevant to the Ciba Geigy case. TeleSession enabled Ciba Geigy to lower T*. The lowering of T* was accomplished by increasing the average
centrality and the network range of the network. According to our analysis an increase in these two would lead to an increase in the INP, or the innovative potential of the network, which in turn would lead to a larger number of early adoptions, and thus speed up the diffusion. If our model is parameterized, then it can perhaps provide directions for resource allocations. The details provided in the HBS case on Ciba Geigy does not permit such a calibration. However, such a calibration may be carried out in future applications.
The impact of network structure 259
References Abrahamson, E. and Rosenkopf, L. (1997), “Social network effects on the extent of innovation diffusion: a computer simulation”, Organization Science, Vol. 8 No. 3, pp. 289-309. Ahuja, G. (1998), “Collaboration networks, structural holes, and innovation: a longitudinal study”, working paper series. Bailey, N.T.J. (1975), The Mathematical Theory of Infectious Diseases and its Applications, Charles Griffen, London. Bass, F.M. (1969), “A new product growth model for consumer durables”, Management Science, Vol. 15, pp. 215-27. Becker, M.H. (1970), “Sociometric location and innovativeness: reformulation and extension of the diffusion model”, American Sociological Review, Vol. 35, pp. 267-82. Bian, Y. (1997), “Bringing strong ties back in: indirect ties, network bridges, and job searches in China”, American Sociological Review, Vol. 62, pp. 366-85. Black, F.L. (1966), “Measles endemicity in insular populations: critical community size and its evolutionary implication”, Journal of Theoretical Biology, Vol. 11, pp. 207-11. Bretschneider, S.I. and Bozeman, B. (1986), “Adaptive diffusion models for the growth of robotics in New York state industry”, Technological Forecasting and Social Change, Vol. 30, pp. 111-21. Burkhardt, M.E. and Brass, D.J. (1990), “Changing patterns of change: the effects of a technology on social network structure and power”, Administrative Science Quarterly, Vol. 35, pp. 104-27. Burt, R.S. (1983a), “Range”, in Burt, R.S. and Minor, M.J. (Eds), Applied Network Analysis: A Methodological Introduction, Sage, Beverly Hills, CA. Burt, R.S. (1983b), Toward a Structural Theory of Action: Network Models of Social Structure, Perception, and Action, Academic Press, New York, NY. Burt, R.S. (1987), “Social contagion and innovation: cohesion versus structural equivalence”, American Journal of Sociology, Vol. 92, pp. 1287-335. Burt, R.S. (1991), STRUCTURE: Reference Manual, Columbia University, New York, NY. Burt, R.S. (1992), Structural Holes: The Social Structure of Competition, Harvard University Press, Cambridge, MA. Burt, R.S. (1997), “The contingent value of social capital”, Administrative Science Quarterly, Vol. 42, pp. 339-65. Burt, R.S. (1998), “The network structure of social capital”, paper presented at the Social Networks and Social Capital Conference at Duke University. Coleman, J.S., Katz, E. and Menzel, H. (1966), Medical Innovation: A Diffusion Study, Bobbs Merrill, New York, NY. Epstein, A.L. (1961), “The network and urban social organization”, Rhodes-Livingstone Journal, Vol. 29, pp. 29-62, cited in Rogers, E.M. and Kincaid, D.L. (1981), Communication Networks: Toward a New Paradigm for Research, Free Press, New York, NY.
EJIM 8,2
260
Francesco, A.M. and Gold, B.A. (1998), International Organizational Behavior, Prentice-Hall, Upper Saddle River, NJ. Freeman, L.C. (1979), “Centrality in social networks: conceptual clarification”, Social Networks, Vol. 1, pp. 215-39. Fenech, T. and O’Cass, A. (2001), “Internet users’ adoption of web retailing: user and product dimensions”, Journal of Product and Brand Management, Vol. 10 No. 6, pp. 362-81. Galaskiewicz, J. (1979), Exchange Networks and Community Politics, Sage, Beverly Hills, CA. Ganesh, J., Kumar, V. and Subramaniam, V. (1997), “Learning effect in multinational diffusion of consumer durables: an exploratory investigation”, Journal of the Academy of Marketing Science, Vol. 25 No. 3, pp. 214-28. Gerlach, M.L. (1992), “The Japanese corporate network: a blockmodel analysis”, Administrative Science Quarterly, Vol. 37, pp. 105-39. Giddens, A. (1984), The Constitution of Society: Outline of the Theory of Structuration, University of California Press, Berkeley, CA. Granovetter, M. (1973), “The strength of weak ties”, American Journal of Sociology, Vol. 78, pp. 1360-80. Granovetter, M. (1974), Getting a Job: A Study of Contacts and Careers, Harvard University Press, Cambridge, MA. Granovetter, M. (1985), “Economic action and social embeddedness”, American Journal of Sociology, Vol. 91, pp. 481-510. HBS Case #9-582-026 (1982), CIBA-GEIGY Agricultural Division, Harvard Business School Publishing, Boston, MA. Helsen, K., Jedidi, K. and DeSarbo, W.S. (1993), “A new approach to country segmentation utilizing multinational diffusion patterns”, Journal of Marketing, Vol. 57, pp. 60-71. Hofstede, G.H. (1980), “Motivation, leadership, and organization: do American theories apply abroad?”, Organizational Dynamics. Iacobucci, D. (1996), “Concerning the diffusion of network models in marketing”, Journal of Marketing, Vol. 60 No. 3, pp. 134-5. Iacobucci, D. and Hopkins, N. (1992), “Modeling dyadic interactions and networks in marketing”, Journal of Marketing Research, Vol. XXIX, pp. 5-17. Ibarra, H. (1993), “Network centrality, power, and innovation involvement: determinants of technical and administrative roles”, Academy of Management Journal, Vol. 36 No. 3, pp. 471-501. Kelley, M.R. and Brooks, H. (1991), “External learning opportunities and the diffusion of process innovations to small firms”, Technological Forecasting and Social Change, Vol. 39, pp. 103-25. Kraatz, M.S. (1998), “Learning by association? Interorganizational networks and adaptation to environmental change”, Academy of Management Journal, Vol. 41 No. 6, pp. 621-43. Krassa, M.A. (1988), “Social groups, selective perception, and behavioral contagion in public opinion”, Social Networks, Vol. 10, pp. 109-36. Krackhardt, D. (1995), “Entrepreneurial opportunities in an entrepreneurial firm: a structural approach”, Entrepreneurship Theory & Practice, Vol. 19, pp. 53-69. Lilien, G.L., Kotler, P. and Moorthy, K.S. (1992), Marketing Models, Prentice-Hall, Englewood Cliffs, NJ. Madhavan, R. (1996), “Strategic flexibility and performance in the global steel industry: the role of interfirm linkages”, unpublished dissertation, University of Pittsburgh, Pittsburgh, PA.
Madhavan, R., Koka, B.R. and Prescott, J.E. (1998), “Networks in transition: how industry events (re)shape interfirm relationships”, Strategic Management Journal, Vol. 19 No. 5, pp. 439-59. Mahajan, V., Muller, E. and Bass, F.M. (1990), “New product diffusion models in marketing: a review and directions for research”, Journal of Marketing, Vol. 54, pp. 1-26. Marsden, P.V. (1990), “Network data and measurement”, Annual Review of Sociology, Vol. 16, pp. 435-63. Midgley, D.F., Morrison, P.D. and Roberts, J.H. (1992), “The effect of network structure in industrial diffusion processes”, Research Policy, Vol. 21, pp. 533-52. Mizruchi, M.S. (1992), The Structure of Corporate Political Action, Harvard University Press, Cambridge, MA. Money, R.B., Gilly, M.C. and Graham, J.L. (1998), “Explorations of national culture and word-of-mouth referral behavior in the purchase of industrial services in the United States and Japan”, Journal of Marketing, Vol. 62 No. 4, pp. 76-87. Morris, A. (1981), “Black Southern sit-in movement: an analysis of internal organization”, American Sociological Review, Vol. 46, pp. 744-67. Nohria, N. and Gulati, R. (1996), “Is slack good or bad for innovation?”, Academy of Management Journal, Vol. 39 No. 5, pp. 1245-64. Pfeffer, J. and Salancik, G.R. (1978), The External Control of Organizations, Harper & Row, New York, NY. Podolny, J.E. and Baron, J.N. (1997), “Relationships and resources: social networks and mobility in the workplace”, American Sociological Review, Vol. 62, pp. 673-93. Porter, M.E. (1990), The Competitive Advantage of Nations, Free Press, New York, NY. Rogers, E.M. (2003), Diffusion of Innovations, 5th ed., Free Press, New York, NY. Rogers, E.M. (2004a), “The diffusion of innovations model and marketing”, paper presented at the 16th Paul D. Converse Marketing Award Symposium, 30 April-2 May, University of Illinois at Urbana-Champaign, Urbana-Champaign, IL. Rogers, E.M. (2004b), “A perspective and retrospective look at the diffusion model”, Journal of Health Communication, Vol. 9 No. 51, pp. 13-19. Rogers, E.M. and Kincaid, D.L. (1981), Communication Networks: Toward a New Paradigm for Research, Free Press, New York, NY. Scott, J. (1991), Network Analysis: A Handbook, Sage, Newbury Park, CA. Simon, H.A. (1985), “What we know about the creative process”, in Kuhn, R.L. (Ed.), Frontiers in Creative and Innovative Management, Ballinger, Cambridge, MA. Singhal, A. and Rogers, E.M. (2003), Combating AIDS: Communication Strategies in Action, Sage, New Delhi. Strang, D. and Soule, S.A. (1998), “Diffusion in organizations and social movements: from hybrid corn to poison pills”, Annual Review of Sociology, Vol. 24 No. 1998, pp. 265-90. Takada, H. and Jain, D (1991), “Cross-national analysis of diffusion of consumer durable goods in Pacific Rim countries”, Journal of Marketing, Vol. 55, pp. 48-54. Uzzi, B. (1997), “Social structure and competition in interfirm networks: the paradox of embeddedness”, Administrative Science Quarterly, Vol. 42, pp. 35-67. Valente, T.W. (1995), Network Models of the Diffusion of Innovations, Hampton Press, Cresskill, NJ.
The impact of network structure 261
EJIM 8,2
262
Valente, T.W. and Rogers, E.M. (1995), “The origins and development of the diffusion of innovations: paradigm as an example of scientific growth”, Science Communication, Vol. 1,. von Hippel, E. (1988), The Sources of Innovation, Oxford University Press, New York, NY. Wellman, B. (1988), “Structural analysis: from method and metaphor to theory and substance”, in Barry, W. and Berkowitz, S.D. (Eds), Social Structures: A Network Approach, Cambridge University Press, New York, NY. Wolfeiler, D. (1998), “Community organizing and community building among gay and bisexual men: The STOP AIDS project”, in Minkler, M. (Ed.), Community Organizing and Community Building for Health, Rutgers University Press, New Brunswick, NJ, pp. 230-43. Further reading Sultan, F., Farley, J.U. and Lehman, D.R. (1990), “A meta-analysis of diffusion models”, Journal of Marketing Research, pp. 70-7. (Ben Shaw-Ching Liu is Associate Professor of Marketing at the College of Business Administration, Butler University, Indianapolis, IN 46208. He holds a PhD from the State University of New York at Buffalo. He was Assistant Professor at University of Illinois, Urbana-Champaign 1991 – 1999. His major research interest is in the areas of service marketing, cross-cultural effects, negotiation behaviors of buyers and sellers, distribution channel member relationships, and positioning strategy in new product development. He is on the editorial board of the Journal of Service Research. Ravindranath (“Ravi”) Madhavan is Associate Professor of Business Administration in the Katz Graduate School of Business, University of Pittsburgh, 236 Mervis Hall, Pittsburgh, PA 15260. Phone (412)648-1530, email
[email protected]. He earned his PhD in strategic management at the University of Pittsburgh in 1996. His research is aimed at understanding the effect of inter-firm networks on competitive advantage. D. Sudharshan is Dean and Professor of Gatton College of Business and Economics, University of Kentucky, 255F Business and Economics Bldg., Lexington, KY 40506-0034, Tel: (859)257-8939, Fax: (859)257-8938, E-mail:
[email protected]. He holds a PhD from the University of Pittsburgh. His research is in the areas of marketing strategy, new product development, and marketing technology management. He is on the editorial boards of the Journal of Marketing and the Journal of Market Focused Management.)